Enterprise Cloud Infrastructure

Aunalytics Cloud in Top 4% Among Global VMware Cloud Providers

South Bend, IN, May 8, 2020 – Aunalytics has earned VMware’s Cloud Verified designation, signifying Cloud services that offer the complete value of VMware’s cloud infrastructure. Only 169 of nearly 4,500 VMware Cloud partners in the world have earned the Cloud Verified designation. Aunalytics is recognized for maintaining the highest level of consistency for infrastructure, operations, and developer experience.

“The Aunalytics Cloud offers advanced analytics, production and disaster recovery services utilizing the strengths of the VMware Cloud platform. VMware’s endorsement of the Aunalytics Cloud attests to the technical proficiency of our Cloud and corporate data centers,” said Aunalytics Chief Operating Officer Terry Gour. “In addition to our partnership with VMware, it’s our people that set us apart from other Cloud providers, working side-by-side with you throughout every step of your digital transformation. With Aunalytics you get world-class technology while gaining access to an on-demand team ready to help you transform your organization.”

With VMware software, organizations have the freedom to build and deploy modern applications, migrate seamlessly between environments, and ensure that all data and applications remain secure and protected. Pairing the world’s leading Cloud infrastructure software with Aunalytics’ human intelligence provides an end-to-end data solution that can be customized to an organization’s unique requirements.

Cloud Verified providers support a distributed workforce with scale-on-demand capacity and services that are:

  • Scalable – elastic infrastructure that supports the dynamic needs of today’s cloud computing environments
  • Efficient – run, manage, connect, and secure applications across clouds and devices in a common operating environment
  • Optimized – rely on your team’s existing skill sets leveraging the consistent infrastructure and operations of the VMware Cloud Infrastructure

For more details and information about Aunalytics’ Cloud Verified designation, please visit https://cloud.vmware.com/providers/cloud-providers/Aunalytics.

About Aunalytics

Aunalytics brings together the best of human intelligence and leading digital technologies. Through a seamless integration of tailored IT initiatives, secure cloud infrastructure, and advanced analytics, we act as a catalyst for human and organizational thriving. With over 185 employees between our South Bend, Indiana and Kalamazoo, Michigan locations, we work closely with your team throughout every step of your digital journey.


Financial Services

Six Stages of Digital Transformation for Financial Institutions

Many financial institutions have been around for decades. They’ve successfully implemented the technology advances necessary to stay relevant, such as using the latest core systems and implementing digital banking services for their customers. However, the journey to a complete digital transformation involves both technical changes as well as strategic and organizational changes. To truly embrace technology and prepare for the future, each financial organization must embark on a multi-phase process to reach their full potential. We have outlined the six stages of this transformation to give banks and credit unions a high-level roadmap of what needs to occur in order to realize a complete digital transformation.

1 | Business as Usual

In the first stage of digital transformation, banks and credit unions are still maintaining the status quo rather than experimenting with new digital initiatives. Some are still hosting their business applications internally and are spending significant amounts of time performing required compliance audits. They manually compile reports using pivot tables in Excel or other spreadsheet programs. While each department has its own reports, there is little to no aggregation of data across multiple departments; only a manually created deck assembled and shared once a month. This means that they are limited to basic reporting metrics rather than deep analytics.

While they may discover some insights in their current data metrics, the insights that are gleaned from these manual reports may not be easily acted upon. Small projects may be taken on by individual departments, but there are no formal processes, and these projects are largely siloed from one another. Many times, the IT department “owns” the digital initiatives, and they tend to be tech-first rather than customer-first. Therefore, organizations are unlikely to see any significant outcomes from the small, one-off projects that are taking place during this stage, and they do not drive the overall business strategy.

2 | Present & Active

As the technology landscape evolves, it can be easy to get excited about new products and services that promise to revolutionize the field. But many banks and credit unions are not ready to go all-in until these claims are tested. Oftentimes, an experimental pilot project will get off the ground within one department. For example, they may start pulling new operational reports out of their core banking system, utilize very basic customer segmentation for a marketing campaign, or consider moving to a cloud-based system to host some of their internal applications.

However, their data continues to be siloed, insights and best practices around new technology initiatives are not shared throughout the organization, and there is little to no executive-level involvement. Still, for most banks and credit unions, dabbling in new strategies and technologies is the first step to creating a sense of excitement and building a case for digital transformation on a larger scale.

3 | Formalized

As banks and credit unions begin to see momentum build from existing pilot programs, it is increasingly easier to justify investments into new digital initiatives. In the past, marketing, customer care, and transaction core databases had been siloed; separate reporting for each was the norm. However, in the pursuit of digital transformation, moving internal applications to the cloud is an important milestone on the path to creating a single source of truth and making data accessible across the enterprise.

At this stage, a clearer path toward digital transformation emerges for the bank or credit union. More formalized experimentation begins, including greater collaboration between departments and the involvement of executive sponsors. The development of technology roadmaps, including plans to move systems to the cloud and expand internal or external managed IT and security services, ensures that the bank is strategically positioned to advance its digital initiatives.

4 | Strategic

The pace really picks up in the next stage as collaboration across business units increases and the C-suite becomes fully engaged in the digital transformation process. This allows banks and credit unions to focus on long-term strategy by putting together a multi-year roadmap for digital efforts. This is the stage where an omni-channel approach to the customer journey becomes realistic, leading to a unified customer experience across all touch points—both physical and digital. Technology is no longer implemented for the sake of an upgrade, but rather, to solve specific business challenges.

However, some challenges may present themselves at this stage. As data is more freely accessible, the quality and accuracy of the data itself may be called into question. This accentuates the need for a strategic data governance plan for the bank or credit union as a whole.

5 | Converged

Once digital champions have permeated both the executive team and the majority of business units, it becomes necessary to create a governing body or “Center of Excellence” focused specifically on digital transformation initiatives and data governance across the company. This structure eliminates repetitive tasks and roles, and allows for a unified roadmap, shared best practices, and the development of a single bank-wide digital culture and vision.

Banks and credit unions can further refine their omni-channel approach to optimizing the customer experience by creating customer journey maps for each segment. This leads to optimization of every touchpoint along the way, both digital and traditional, and informs the overall business strategy. Marketing can start to run and track highly personalized campaigns for current customers and new customers.

At this point, one-off tools are abandoned in favor of an all-encompassing cloud analytics platform to gather, house, join, and clean data in order to deliver relevant, up-to-date insights. All employees are trained on digital strategy, and new hires are screened for their ability to contribute in a digitally transformed environment. In the Converged stage, digital transformation touches every aspect of the business.

6 | Innovative & Adaptive

The final stage of the digital transformation journey can be defined by continued experimentation and innovation, which, by now, is a part of the organization’s DNA. Investment in the right people, processes, and platforms optimizes both customer and employee experiences, as well as the operations of the bank or credit union as a whole.

Through the Center of Excellence group, pilot projects are tested, measured, and rolled out, providing a continuous stream of innovation. The data, reporting, and analytics capabilities of the omni-channel cloud analytics platform are integrated across every department, spreading from Marketing into Sales, Service, and HR, among others. Fully personalized marketing campaigns target customers who have triggers in their checking, mortgage, or wealth management accounts, or who take actions through customer care or the app. This allows the bank or credit union to make relevant recommendations on products such as loans, refinancing, and wealth management.

Training programs are set up to bring all employees up to speed on the iteration and innovation cycle, and HR is closely involved in filling the talent gaps. Financial institutions may partner with or invest in startups to further advance their technology and innovation initiatives.

Conclusion

By embracing new technologies and setting up the processes necessary for a complete digital transformation, banks and credit unions are able to personalize the customer experience, enhance and streamline operations, and stay nimble in the face of changing times. No matter where your organization currently falls on this journey, your partners at Aunalytics will help you reach your ultimate goals by providing technology, people, and processes that can take your bank or credit union to the next level.

This article was inspired by research by Altimeter, as summarized in “The Six Stages of Digital Transformation” which can be downloaded here.



Coronavirus Response

Aunalytics Coronavirus Response

At Aunalytics, our clients’ success and our team’s safety are top priorities. In times of natural disaster and other unexpected or challenging events, we have plans, processes, and teams in place to ensure our services remain available and our team is protected. This approach should allow our clients to focus on their critical business goals and the well-being of their workforce.

Therefore, we find it appropriate at this time to share the following update:

COVID-19 Business Continuity and Crisis Management

We do not foresee any impact to the delivery of any Aunalytics services due to COVID-19, and as always, we are committed to keeping our solutions available and running for our clients regardless of their location.

Our cloud-based, multi-tenant architecture model and distributed data center approach is designed to operate without any service disruptions. We also have proactively formed an internal Coronavirus Response Team to mitigate any potential disruptions.

Working Remotely

Aunalytics team members have the resources and tools they need to perform their jobs securely from any location. Therefore, we are recommending that our team members do not travel to client worksites unless absolutely necessary. In the event that an onsite presence is required, we will work together with our clients to ensure we are putting safety first for all individuals.

To make determinations around remote work and office closures, we carefully monitor and consider advice from the World Health Organization, the Centers for Disease Control, the U.S. Department of State, and government and health officials in local communities where our team members live and work.

Business Continuity Planning

As a cloud-based company with a dispersed team and flexible, remote workforce, we are prepared to virtually manage business continuity challenges. Our approach is designed to ensure that our services remain available to our clients during natural disasters or other unexpected and challenging events. Our business continuity planning is also intended to ensure we have open channels for timely communications with our clients and other stakeholders.

Service Operations

Our cloud services are designed with a high degree of redundancy and geographic fail-over capabilities to reduce the likelihood of significant impact. We maintain policies and procedures for responding to emergencies, including a disaster recovery plan. Aunalytics has data centers and technical support located in multiple locations to reduce the impact of a regional or local disruption.

We sincerely appreciate your faith in us and take the responsibility to our clients and team members very seriously. We will continue to provide updates as warranted.

Sincerely,

Richard F. Carlton
President

Terry A. Gour
COO


Google Chrome Critical Update 10-31-2019

Multiple vulnerabilities have been discovered in Google Chrome, the most severe of which could allow for arbitrary code execution. Google Chrome is a web browser used to access the Internet. Successful exploitation of the most severe vulnerabilities could allow an attacker to execute arbitrary code in the context of the browser. Depending on the privileges associated with the application, an attacker could install programs; view, change, or delete data; or create new accounts with full user rights. If this application has been configured to have fewer user rights on the system, exploitation of the most severe of these vulnerabilities could have less impact than if it was configured with administrative rights.

How to check for an update

  1. On your computer, open Chrome.
  2. At the top right, click More.
  3. Click Help, and then About Google Chrome.
    • Chrome will check for updates when you’re on this page.
    • To apply any available updates, click Relaunch.

THREAT INTELLIGENCE:

There are reports that an exploit for CVE-2019-13720 exists in the wild.

SYSTEMS AFFECTED:

  • Google Chrome versions prior to 78.0.3904.87

RISK:

Government:

  • Large and medium government entities: High
  • Small government entities: Medium

Businesses:

  • Large and medium business entities: High
  • Small business entities: Medium

Home users: Low

TECHNICAL SUMMARY:

Multiple vulnerabilities have been discovered in Google Chrome, the most severe of which could result in arbitrary code execution. These vulnerabilities can be exploited if a user visits, or is redirected to, a specially crafted web page. Details of the vulnerabilities are as follows:

  • Use-after-free in audio (CVE-2019-13720)
  • Use-after-free in PDFium (CVE-2019-13721)

Successful exploitation of the most severe of these vulnerabilities could allow an attacker to execute arbitrary code in the context of the browser, obtain sensitive information, bypass security restrictions and perform unauthorized actions, or cause denial-of-service conditions.

RECOMMENDATIONS:

We recommend the following actions be taken:

  • Apply the stable channel update provided by Google to vulnerable systems immediately after appropriate testing.
  • Run all software as a non-privileged user (one without administrative privileges) to diminish the effects of a successful attack.
  • Remind users not to visit un-trusted websites or follow links provided by unknown or un-trusted sources.
  • Inform and educate users regarding the threats posed by hypertext links contained in emails or attachments especially from un-trusted sources.
  • Apply the Principle of Least Privilege to all systems and services.

REFERENCES:

Google:

https://chromereleases.googleblog.com/2019/10/stable-channel-update-for-desktop_31.html


Customer Intelligence

Artificial Intelligence, Machine Learning, and Deep Learning

What Exactly is "Artificial Intelligence"?

If you use an automated assistant, make a simple Google search, get recommendations on Netflix or Amazon, or find a great deal in your inbox, then you will have interacted with AI (Artificial Intelligence). Indeed, it seems that every company and service today is incorporating AI in some way or another. But let’s dissect what the phrase ‘Artificial Intelligence’ means.

Most people would agree that AI is not so advanced that these companies would have Rosie from The Jetsons analyzing customer data or Skynet making product recommendations on their store page. And on the other end, at some level it is commonly understood that AI is more complex than simple business rules and nested ‘if this, then that’ logical statements.

Things start to get murky when other phrases, often conflated with AI, are added to the mix. Amongst these terms are Machine Learning (ML) and Deep Learning (DL). One company might say they use ML in their analytics, while another might claim to use DL to help enhance creativity. Which one is better or more powerful? Are either of these actually AI? Indeed, a single company may even use these words interchangeably, or use the overlap of definitions to their marketing advantage. Still others may be considering replacing an entire analytics department with DL specialists to take advantage of this new ‘AI Revolution’.

Don’t get swept up by the hype; let’s shine a light on what these terms really mean.

Teasing out the Differences between AI, ML and DL

These three terms—Artificial Intelligence, Machine Learning, and Deep Learning—are critical to understand on their own, as is how they relate to each other, whether for a sales team explaining the services they provide or for the data scientists who must decide which of these model types to use. And while it is true that each of AI, ML, and DL has its own definition, data requirements, level of complexity, transparency, and limitations, what that definition is and how each relates to the others depends entirely on the context in which you look at them.

For example, what constitutes Machine Learning from a data acquisition perspective might look an awful lot like Deep Learning, in that both require massive amounts of labeled data, while the two look nothing alike in the context of the types of problems each can solve, or in the context of the skill sets required to get a specific model up and running.

For the purposes of this thought piece, the context we will be using is complexity—how each of Artificial Intelligence, Machine Learning, and Deep Learning simulates human intelligence, and how they incrementally build on one another. This simulation of human intelligence, called simply machine intelligence, is measured by the machine’s ability to predict, classify, learn, plan, reason, and/or perceive.

The interlink between Artificial Intelligence, Machine Learning, and Deep Learning is an important one, and it is built on this context of increasing complexity. Because of the strong hierarchical relationship between these terms, the graphic above demonstrates how we at Aunalytics have chosen to organize these ideas. Artificial Intelligence is the first of the three terms, both because it historically originated first and because it is the overarching term that covers all work within the field of machine intelligence. AI, as we use it, can best be described in two ways. The most general definition of Artificial Intelligence is any technique that enables machines to mimic human intelligence.

Indeed, it may seem that any number of things computers are capable of today could be seen as AI, although the focus here is not the ability to do math or maintain an operating system—these are not ‘intelligent’ enough. Rather, we are considering applications like game AI, assistive programs like Microsoft’s ‘Clippy’, and expert systems, which must predict useful material or actions, classify tasks and use cases, or perceive user and environmental behaviors to drive some action. In short, they display machine intelligence.

The key here is that all of these things perform an activity that we might attribute to human intelligence—moving a bar to follow a perceived ball in the classic video game Pong, recognizing that you are writing what looks to be a letter and then providing a useful template, or predicting an answer for you based on your current problem. In each scenario, the AI is provided some sort of input and must respond with some form of dynamic response based on that input.

Glossary


Artificial Intelligence (AI): Any technique that enables machines to mimic human intelligence, or any rule-based application that simulates human intelligence.

Machine Learning (ML): A subset of AI that incorporates math and statistics in such a way that allows the application to learn from data.

Deep Learning (DL): A subset of ML that uses neural networks to learn from unstructured or unlabeled data.

Feature: A measurable attribute of data, determined to be valuable in the learning process.

Neural Network: A set of algorithms inspired by neural connections in the human brain, consisting of thousands to millions of connected processing nodes.

Classification: Identifying to which category a given data point belongs.

Graphics Processing Units (GPUs): Originally designed for graphics processing and output, GPUs are processing components that are capable of performing many operations at once, in parallel, allowing them to perform the more complicated processing tasks necessary for Deep Learning (which was not possible with traditional CPUs).

Reinforcement Learning: A form of Machine Learning where an agent learns to take actions in a well-defined environment to maximize some notion of cumulative reward.

Sampling: Within the context of AI/ML, sampling refers to the act of selecting or generating data points with the objective of improving a downstream algorithm.

Artificial Intelligence: Machines Simulating Human Intelligence

These kinds of activities are all rule-driven, a distinction that leads to our second, more application-based definition of AI: any rule-based application that simulates human intelligence. Rule-based activities possess a very limited ability to learn, opting instead to simply execute a predetermined routine given the same input. The easy Pong AI will always execute the rule provided—to follow the ball—and no matter how long it plays, it will only ever be able to play at an easy level. Clippy will always show up on your screen when it thinks that you are writing a letter, no matter how many letters you write or how annoyed you may get. This outright inability to learn leaves much to be desired against the bar of what we would consider human-level intelligence.

Machine Learning: Learning from the Data

This brings us to Machine Learning. Machine Learning is a subset of AI that incorporates math and statistics in such a way that allows the application to learn from data. Machine Learning, then, is primarily a data-driven form of Artificial Intelligence, although rule-driven material can still be applied in concert where appropriate. Again, the key differentiator is that the algorithms used to build a Machine Learning model are not hardcoded to yield any particular output behavior. Rather, Machine Learning models are coded such that they are able to ingest data with labels—e.g., this entry refers to flower A, that entry refers to flower B—and then use statistical methods to find relationships within that data in dimensions higher than would be possible for a human to conceptualize. These discovered relationships are key, as they represent the actual ‘learning’ in machine learning. Therefore, it is the data, not the code, in which the desired intelligence is encoded.

Because of this ability to learn from a set of data, generalized models can be built that perform well on certain tasks, instead of needing to hardcode a unique AI for each use case. Common use cases for Machine Learning models include classification tasks, where a model is asked to separate different examples of data into groups based on some learned features. Examples include decision trees, which learn and show how best to branch on features so that you arrive at a homogeneous group (all flower A, or all churning customers). Another common case for Machine Learning is clustering, where an algorithm is not provided labeled data to train on, but rather is given a massive set of data and asked to find which entries are most alike, as sketched in the example below.
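To make the two patterns concrete, here is a minimal sketch using scikit-learn (assumed to be installed), with the bundled Iris dataset standing in for the flower A / flower B example; it illustrates the general idea rather than any specific Aunalytics model.

```python
# Minimal sketch: supervised classification vs. unsupervised clustering.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classification: a decision tree learns how to branch on features
# so that each leaf ends up as a homogeneous group.
tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("Decision tree accuracy:", tree.score(X_test, y_test))

# Clustering: KMeans is given the same measurements without labels
# and asked to find which entries are most alike.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Cluster assignments for the first five rows:", clusters[:5])
```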

In both of these applications there is not only the opportunity for learning, but for continual learning—something that hardcoded, rule-based AI simply cannot do effectively. As more data is collected, there is a growing opportunity to retrain the Machine Learning model and thus yield a more robust form of imitated intelligence. Much of modern business intelligence is built on this style of artificial intelligence, given the massive amount of data that businesses now possess.

Limitations of Machine Learning

This is not to say that machine learning is the pinnacle of AI, as there are some severe limitations. The largest limitation of this approach is that we, as humans, must painstakingly craft the datasets used to train machine learning models. While there are many generalized models to choose from, they require labeled data and handcrafted ‘features’—attributes of the data determined to be valuable in the learning process. Many datasets already contain useful features, but in some domains this is much less so. Imagine, for example, that you wish to build a machine learning model that can intelligently distinguish cats from cars. Perhaps you extract the texture of fur and the sheen of paint—but this is a very difficult thing to do, and it is made even harder when one considers that the model should be general enough to apply to all cats and cars, in any environment or position. Sphynx cats don’t have fur, and some older cars have lost their sheen. Even in simpler, non-image cases, the trouble and time spent constructing these datasets can in some cases cost more than the good they accomplish.

Crafting these feature-rich, labeled datasets is only one of the limitations. Certain data types, like the images described above, are simply too dimensionally complex to adequately model with machine learning. Processing images, audio, and video all suffer from this—a reminder that while these forms of AI are powerful, they are not the ultimate solution to every use case. There are other use cases, like natural language processing (NLP), where the goal is to understand unstructured text data as well as a human can; a machine learning model can be constructed here, although more powerful approaches exist that can more accurately model the contextual relationships within language.

Deep Learning: A More Powerful Approach Utilizing Neural Networks

We call this more powerful approach ‘Deep Learning’. Deep Learning is a subset of Machine Learning in that it is data-driven modeling, but Deep Learning adds the concept of neural networks to the mix. Neural networks sound like science fiction, and indeed feature prominently in such work, although the concept has been around for quite some time. Neural networks were first imagined in the field of psychology in the 1940s around the hypothesis of neural plasticity, and migrated to the field of computer science in 1948 with Turing’s B-type machines. Research then stagnated, however, due to conceptual gaps and a lack of powerful hardware.

Modern forms of these networks, having bridged those conceptual and hardware gaps, are able to take on the enormous dimensionality that data-driven tasks demand by simulating, at a naive level, the network-like structure of neurons within a living brain. Inside these artificial networks are hundreds of small nodes, each of which takes in and processes a discrete portion of the total data provided and then passes its output on to the next layer of neurons. With each successive layer, the connections of the network more accurately model the inherent variability present in the dataset, and thus are able to deliver huge improvements in areas of study previously thought to be beyond the reach of data modeling. With such amazing ability and such a long history, it is important to reiterate that neural networks, and thus Deep Learning, have only become practical recently, with the arrival of cheap, high-volume computational power and the bridging of those conceptual gaps.

When people talk about AI, it is Deep Learning and its derivatives that are at the heart of the most exciting and smartest products. Deep Learning takes the best from Machine Learning and builds upon it, keeping useful abilities like continual learning and data-based modeling that generalizes across hundreds of use cases, while adding support for new ones like image and video classification and novel data generation. A huge benefit of this ability to learn high-dimensional relationships is that we, as humans, do not need to spend hours painstakingly crafting unique features for a machine to digest. Instead of creating custom scripts to extract the presence of fur on a cat or a shine on a car, we simply provide the Deep Learning model images of each class we wish to classify. From there, the artificial neurons process the images and learn for themselves the features most important for classifying the training data. This alone frees up hundreds if not thousands of hours of development and resource time for complex tasks like image and video classification, and yields significantly more accurate results than other AI approaches.
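As a rough illustration of that workflow, the sketch below (assuming TensorFlow/Keras is installed, and using randomly generated stand-in arrays rather than a real cat-versus-car dataset) shows how a small convolutional network is handed raw images and labels and left to learn its own features; it is a toy example, not a production model.

```python
# Toy sketch: the network learns features from raw pixels; no handcrafted
# "fur" or "sheen" extraction is written by a human.
import numpy as np
import tensorflow as tf

# Hypothetical training data: 64x64 RGB images labeled 0 = cat, 1 = car.
x_train = np.random.rand(100, 64, 64, 3).astype("float32")
y_train = np.random.randint(0, 2, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "car"
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=16)
```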

One of the more exciting possibilities that Deep Learning brings is the capability to learn the gradients of variability in a given dataset. This provides the unique ability to sample along that newly learned function and pull out a new, never-before-seen data point that matches the context of the original dataset. NVidia has done some amazing work demonstrating this using a type of Deep Learning called Generative Adversarial Networks (GANs): provided thousands of images of human faces, a GAN can sample against the learned feature distribution and produce a new human face—one that does not exist in reality—with a startling level of realism.

Deep Learning Limitations

Like its predecessor in complexity, Machine Learning, Deep Learning has its share of drawbacks. For one, Deep Learning yields results in an opaque way due to its methodology, an attribute known as ‘black box’ modeling. In other words, the explainability of the model—why it classifies data as it does—is not readily apparent. The same functionality that gives Deep Learning so much control in determining its own features is the same functionality that obscures what the model deems ‘important’. This means that we cannot say why a general Deep Learning model classifies an image as a cat instead of a car—all we can say is that there must be some statistical commonalities within the training set of cats that differ significantly enough from those of the car dataset—and while that is a lot of words, it unfortunately does not give us a lot of actionable information. We cannot say, for example, that because an individual makes above a certain amount of money, they become more likely to repay a loan. This is where Machine Learning techniques, although more limited in their scope, outshine their more conceptually and computationally complex siblings, as ML models can and typically do contain this level of information. Especially as DL models are depended upon more heavily in fields like self-driving vehicles, this ability to explain decisions will become critical to garnering trust in these Artificial Intelligences.

Another large drawback to Deep Learning is the sheer size of the computational workload it demands. Because these models simulate, even at only a basic level, the connections present in a human brain, propagating information through that network on a feasible time scale requires special hardware. This hardware, in the form of Graphics Processing Units (GPUs), is a significant resource cost for any organization digging into Deep Learning. The power of Deep Learning to learn its own features may offset the initial capital expenditure for the required hardware, but even then, the technical expertise required to integrate GPUs into a technology stack is more often than not the true pain point of the whole acquisition, and can be the straw that breaks the camel’s back. Even with such large prerequisites, the power and flexibility of Deep Learning for a well-structured problem cannot be denied.

Looking Forward

As the technology continues to grow, so too will the organizing ontology we present today. One example is the rise of reinforcement learning, a subset of Deep Learning and AI that learns not from data alone, but from a combination of data and some well-defined environment. Such technologies take the best of data-driven and rule-driven modeling to become self-training, enabling cheaper data annotation by reducing the amount of initial training data required. With these improvements and discoveries, it quickly becomes difficult to predict what may be mainstream next.

The outline of Artificial Intelligence, Machine Learning, and Deep Learning presented here will remain relevant for some time to come. With a larger volume of data every day, and the velocity of data creation increasing with the mass adoption of sensors and mainstream support of the Internet of Things, data-driven modeling will continue to be a requirement for businesses that wish to remain relevant, and it will remain important for consumers to be aware of how all this data is actually being used. All of this serves the goal of de-mystifying AI and pulling back the curtain on models that have drummed up so much public trepidation. Now that the curtain has been pulled back on the fascinating forms of AI available, we can only hope that the magic of mystery has been replaced with the magic of opportunity. Each of AI, ML, and DL has a place in any organization that has the data and problem statements to chew through, and, in return for the effort, offers unparalleled opportunity to grow and better tailor the business to its customer base.

Special thanks to Tyler Danner for compiling this overview. 



Companies Merge into the New Aunalytics

FOR IMMEDIATE RELEASE

Leading analytics & IT cloud providers unify to align with direction of the future.

SOUTH BEND, IND. (October 1, 2019) – Aunalytics, a leader in data and analytics services for enterprise businesses, announced today that it has unified four entities into the new Aunalytics, adding cloud, managed, and data center services to its offerings.

The newly unified Aunalytics positions the company as a unique leader in the IT market, offering a new category of software and services. Aunalytics is creating a single ecosystem that offers a full-featured, end-to-end cloud platform, capable of everything from traditional cloud hosting, to analytics and artificial intelligence.

This move follows significant growth in Aunalytics’ Midwest footprint as the only provider of a cloud platform with both analytics and production capabilities. The expansion, driven in part by acquiring the cloud infrastructure assets of MicroIntegration in South Bend, IN and Secant in Kalamazoo, MI, has resulted in strong momentum for Aunalytics. Today, the company employs over 180 team members representing 56 universities, six countries, and four branches of the U.S. Armed Forces, and has attracted talent from 11 states specifically to work at Aunalytics.

Secant is now Aunalytics

“The Secant name served us very well for many years, but as we continue to grow across the region and around the country, it is important to evolve the brand to better reflect our people and purpose,” said Steve Burdick, VP Sales, Cloud & Managed Services. “From the most foundational IT needs, to the most challenging AI initiatives, we have an expert team that can help manage every step of a client’s digital journey.”

MicroIntegration is now Aunalytics

“The change of our name represents the merger of multiple companies with unique technology capabilities that when joined together provide a world-class cloud platform with managed services to support virtually any technology our clients need,” said Terry Gour, President & COO, Cloud & Managed Services.

Data Realty is now Aunalytics

“Data Realty’s Northern Indiana data center was one of the first buildings in South Bend’s technology park, Ignition Park, and we’re excited, again, to be part of the largest company at the tech park, as Aunalytics,” said Rich Carlton, President & Data Services Lead. “With almost 200 team members and multiple locations across two states, we have more top talent that can manage every step from hosting to AI.”

The Aunalytics Name

The mathematical symbol (U) means union. The letter “u” added to “analytics” symbolizes the belief that analytics is not just about data or software-as-a-service. It is about a union between technology and human intelligence, between digital partner and client, between cloud and artificial intelligence. Those that can master these unions are the companies that will truly thrive and remain competitive.

“We’re living in a data-rich world that is only getting richer. But harnessing that data is extraordinarily difficult for most companies. Businesses need a partner that can provide not just the right systems and software tools, but the people and judgement to implement them strategically,” says Carlton. “They need a catalyst to help them on their journey to digital transformation. Aunalytics is that catalyst.”

For additional information, contact:
Aunalytics
574.344.0881

About Aunalytics

Aunalytics brings together the best of human intelligence and leading digital technologies to transform business challenges into flexible, scalable solutions, measured by definable business outcomes. Through a seamless integration of tailored IT initiatives, secure cloud infrastructure, and big data, analytics and AI solutions, Aunalytics is a catalyst for human and organizational thriving.

###


Aunsight End-to-End Data Platform

4 Ways Disparate Data Sets Are Holding You Back

As an enterprise with a lot of different sectors and moving parts, having disparate, siloed data is hard to avoid. After all, the marketing department may deal with certain information while the IT team works with other data. The details the finance department leverages aren’t the same as what’s used by HR, and so on. However, when this information exists in separate silos and never fully comes together, it could be holding your organization back considerably, particularly when it comes to your big data initiatives.

Today, we’ll look at just a few of the ways disparate data sets could be a problem for today’s companies, and how your business can help address this prevalent problem.

1) A world of enterprise apps

One of the biggest sources of disparate data is the range of business applications that employees use. While these activities may take place under the watchful eye of the IT team, each application contains information unique to that platform, and if this data isn’t brought together at some point, it can skew analytics results.

According to Cyfe, the average small business utilizes just over 14 different applications. This number jumps to 500 when examining large enterprises.

“[T]he more apps your organization uses the harder it is to make data-driven decisions,” Cyfe noted in a blog post. “Why? Because keeping a pulse on your business’ sales, marketing, finances, web analytics, customer service, internal R&D, IT, and more as isolated sources of data never gives you a complete picture. In other words, big data doesn’t lead to big insights if you can’t bring it together.”

2) Stuck in the information-gathering phase

It’s not only the location of data that can cause an issue – the sheer volume of information can also create significant challenges, particularly when an organization is working to gather all of that information in a single place.

“It can take considerable time to bring this information together without the right resources.”

Informatica pointed out that as much as 80 percent of an analytics initiative involves the actual collection of information in order to establish a bigger, better picture for analysis. However, when a large number of details are located in several different places, it can take considerable time to bring this information together without the right resources. What’s more, as the company is working to pull data from different sources, new, more relevant information is being created that will further impact analysis.

In this type of environment, it’s easy to get stuck in the gathering phase, where data is constantly being collected, while the team doesn’t move on to the analysis part of the initiative as quickly as they should.

3) Fear of missing out: Reducing repetition

This leads us to the next issue: fear of missing out. Because big data is constantly being created and changing so quickly, businesses may hesitate to analyze and leverage the insights due to a fear of missing out on the next piece of data that is just coming to light.

Furthermore, Informatica noted that when data isn’t organized and kept in several different locations, it can cause problems on not just one, but a number of analysis initiatives, as employees will have to repeatedly pull these details, wasting time and effort.

“The key to minimizing repetitive work is finding a way to easily reuse your logic on the next data set, rather than starting from square one each time,” Informatica pointed out.

This is only possible, however, with the right big data platform that can help gather information from all disparate sources in the shortest time possible. In this way, businesses can eliminate costly repetitive processes while still ensuring that nothing falls through the cracks as information is gathered for analysis.
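As a simple illustration of reusing logic rather than starting from square one, the sketch below uses pandas with hypothetical file and column names; the point is that ingestion and cleanup steps live in one reusable function that every new source passes through, not that this is any particular platform’s implementation.

```python
# Minimal sketch (hypothetical file names): one reusable ingestion function
# instead of rebuilding the same cleanup logic for every new data source.
import pandas as pd

def load_and_standardize(path: str, source_name: str) -> pd.DataFrame:
    """Read one source, normalize its column names, and tag its origin."""
    df = pd.read_csv(path)  # placeholder path; swap in a real export
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df["source"] = source_name
    return df

# The same logic is applied to each new data set as it arrives.
frames = [
    load_and_standardize("crm_export.csv", "crm"),
    load_and_standardize("web_analytics.csv", "web"),
    load_and_standardize("billing.csv", "finance"),
]
combined = pd.concat(frames, ignore_index=True)
print(combined.groupby("source").size())
```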

4) Missing information: Is it possible to know what isn’t there?

Siloed data can also lead to gaps in knowledge, which can considerably impact analysis results. For instance, a company seeking to learn more about their client base may include a range of different data sources, but may overlook details in the customer relationship management solution, causing them to miss important insights about brand interactions. While this is an extreme example, it helps illustrate the pitfalls of incomplete data sets.

Addressing disparate data: Partnering for success

These are challenges that can affect businesses in every sector, but can be easily and expertly addressed when companies partner with a leading big data solution provider like Aunalytics. Aunalytics has everything your enterprise needs to fully support its big data initiatives. Our unique, best-of-breed technology, Aunsight, ensures that information is gathered from all disparate sources, and that analysis is always as complete as possible. We help you collect and integrate your data so that workflows and unique algorithms can be established, leading to the most high-quality, actionable insights.


Customer Intelligence

What is Little Data and what does it mean for my big data initiatives?

Big data has been the buzz of the business world for years now, with businesses across every industrial sector gathering and analyzing information in an effort to leverage the resulting actionable insights. For the past few years, “big” has been the name of the game, with organizations working to indiscriminately collect as many details as possible across a whole host of different areas.

Now, however, a new strategy is coming to light: little data. But what, exactly, is little data? How is it related to big data? And how can this approach make all the difference for your business?

Little data: A definition

Little data comes in almost exact contrast to big data, but can also be complementary – and very important – to supporting a big data strategy.

According to TechTarget, little, or small, data are more selective pieces of information that relate to a certain topic or can help answer a more specific pain point.

“Little data comes in almost exact contrast to big data.”

“Small data is data in a volume and format that makes it accessible, informative and actionable,” TechTarget noted.

The Small Data Group further explains that little data looks to “connect people with timely, meaningful insights (derived from big data and/or ‘local’ sources) organized and packaged – often visually – to be accessible, understandable, and actionable for everyday tasks.”

A step further: What’s the difference?

The key differences here are demonstrated by big data’s defining characteristics. Big data, as opposed to little data, is often defined by what are known as the three V’s: volume, variety, and velocity. The first two are particularly important here. Whereas big data usually comes in the form of large volumes of unstructured or structured information from a range of different sources, little data simply doesn’t cover as much ground.

Little data, on the other hand, comes from more precise sources and will include a smaller amount of information in order to address a previously defined problem or question. Where big data is vastly collected and then analyzed for insights that might not have been accessible previously, little data is gathered and analyzed in a more specific way.

Forbes contributor Bernard Marr noted that little data typically includes more traditional key performance metrics, as opposed to large, indiscriminate datasets.

“Data, on its own, is practically useless. It’s just a huge set of numbers with no context,” Marr wrote. “Its value is only realized when it is used in conjunction with KPIs to deliver insights that improve decision-making and improve performance. The KPIs are the measure of performance, so without them, anything gleaned from big data is simply knowledge without action.”

Little data and big data: Working in tandem

However, this is not to say that little and big data cannot work together. In fact, little data can help bring additional insight and meaning to the analysis results of big data.

For instance, a big data analysis initiative could show certain patterns and facts about a business’s customers. Little data can then bring even more to the table, helping to answer more specific questions according to key performance indicators.

These KPIs can also be utilized to measure an organization’s ability to put its big data insights to work for the business.

“For example, a retail company could use a big data initiative to develop promotional strategies based on customer preferences, trends and customized offers,” Marr noted. “But without traditional KPIs such as revenue growth, profit margin, customer satisfaction, customer loyalty or market share, the company won’t be able to tell if the promotional strategies actually worked.”
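To make that pairing concrete, the short pandas sketch below computes one such KPI, revenue growth, for a hypothetical promotional segment versus a control segment; the figures and column names are invented purely for illustration.

```python
# Invented example data: quarterly revenue for a promo group vs. a control group.
import pandas as pd

sales = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "segment": ["promo", "promo", "control", "control"],
    "revenue": [120_000, 138_000, 115_000, 117_000],
})

# Revenue growth per segment: did the promotional strategy actually work?
growth = (
    sales.pivot_table(index="segment", columns="quarter", values="revenue")
         .assign(growth_pct=lambda t: (t["Q2"] - t["Q1"]) / t["Q1"] * 100)
)
print(growth["growth_pct"])
```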

Little data in action

Little data can also be more personal in nature, offering knowledge and actionable insights for a company’s consumers. Nowhere is this more prevalent than in the fitness industry, particularly with the popularity of wearable fitness monitors that sync to a user’s mobile device.

Harvard Business Review contributor Mark Bonchek noted that oftentimes, little data pertains to each consumer as an individual, and that these details are what companies seek out to utilize as part of their big data strategies.

“Big data is controlled by organizations, while little data is controlled by individuals,” Bonchek wrote. “Companies grant permission for individuals to access big data, while individuals grant permission to organizations to access little data.”

Returning to the wearable fitness device example, little data would comprise the informational insights that are delivered by the tracking module, including distance traveled, weight changes, calorie intake, etc. A big data initiative related to these findings would require that the consumers utilizing these fitness trackers grant access to this information. From here, an organization could analyze a whole host of little data sources to offer a more global, overarching look at users’ habits.

Leveraging big and little data

If your company is interested in harnessing the power of little data as part of a big data strategy, it’s imperative to have a partner that can help fill in any gaps. Aunalytics has access to a variety of data sets, including those that can provide the right insights for your business.


Aunsight End-to-End Data Platform

Understanding Analytics Part 2: Top External Sources of Big Data

Big data analysis is one of the most powerful strategies today’s corporations have in their repertoire. Gathering and analyzing relevant information to better understand trends and glean other insights can offer a nearly endless number of benefits for companies as they look to offer better customer services and enhance their own internal processes.

Before that analysis can result in impactful insights, though, a company must first collect the information they’ll leverage as part of the initiative. Different datasets will provide different results, and there are a range of sources where these details can come from.

In the first part of this series, we examined a few of the top internal sources of data, including transactional information, CRM details, business applications and other company-owned assets. These sources are already under the business’s control, and are therefore some of the first places data scientists look as part of their information gathering efforts.

Sometimes, however, this data isn’t enough. Whether the organization is seeking to answer broader questions about the industry, or better understand potential customers, these initiatives may require the analytics team to look outside the company’s own data sources.

When this takes place, it’s critical that the enterprise understands the most valuable places to gather data that will best benefit its current processes. Today, we’ll take a look at the top sources of external data, including public information that isn’t owned by the company.

Social media: Connecting with your customers

One of the most robust external big data sources is social media channels, including Facebook, Instagram and Twitter. These sites have become incredibly popular – not only for individual customers, but for corporations as well. Through social media profiles, businesses can put an ear to the ground, so to speak, and get a better understanding of their current and potential customers.

And with so many users flocking to these platforms, the potential for big data is significant:

  • Facebook had more than 1.5 billion active users as of April, 2016.
  • Twitter had 320 million active users in the first quarter of this year.
  • Instagram had 400 million active users in early 2016.
  • Other platforms aren’t far behind: Snapchat boasts more than 200 million users, Pinterest and LinkedIn were tied at 100 million active users.

In addition, tools like Facebook Graph help companies make the best use of this information, aggregating a range of details that users share on the platform each day.

“Social media data can be incredibly telling.”

Overall, social media data can be incredibly telling, offering insights into both positive and negative brand feedback, as well as trends, activity patterns and customer preferences. For instance, if a company notices that a large number of social media users are seeking a specific type of product, the business can move to corner the market and address these needs – all thanks to social media big data insights.

Public government data

While social media information is no doubt powerful, it isn’t the only external data source companies should pay attention to. The federal government also provides several helpful informational sources that help today’s enterprises get a better picture of the public. According to SmartData Collective, a few of the best places to look include:

  • Data.gov: This site was recently set up by federal authorities as part of the U.S. government’s promise to make as much data as possible available. Best of all, these details are free, and accessible online. Here, companies will find a wealth of data, including information related to consumers, agriculture, education, manufacturing, public safety and much more.
  • Data.gov.uk: Businesses looking for a more global picture can look to this site, where the U.K. government has amassed an incredible amount of metadata dating back to 1950.
  • The U.S. Census Bureau: The Census Bureau has also made a range of data available online, covering areas such as overall population, geographical information and details related to regional education.
  • CIA World Factbook: The Central Intelligence Agency no doubt has huge repositories of information at its disposal, and has made select information available via its online Factbook. This resource provides data on global population, government, military, infrastructure, economy and history. Best of all, it covers not only the U.S., but 266 other countries as well.
  • Healthdata.gov: Health care information can also be incredibly powerful for companies in that industry, as well as those operating in other sectors. This site provides more than 100 years of U.S. health care information, including datasets about Medicare, population statistics and epidemiology.

Google: The data king

Google has also provided a few key, publicly available data sources. As one of the biggest search engines in the world, Google has a wealth of information about search terms, trends and other online activity. Google Trends is one of the best sources here, providing statistical information on search volumes for nearly any term – and these datasets stretch back to nearly the dawn of the internet.
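For teams that want to pull these search trends programmatically rather than through the website’s CSV export, the sketch below uses the third-party pytrends package (an unofficial Google Trends client, assumed to be installed via pip); the search term is purely illustrative.

```python
# Sketch using the unofficial pytrends client (pip install pytrends).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["data analytics"], timeframe="today 5-y")

# Weekly interest scores (0-100) for the term over the past five years.
interest = pytrends.interest_over_time()
print(interest.tail())
```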

Other powerful sources provided by Google include Google Finance, which offers 40 years of stock market data that is continually updated in real time. In addition, Google Books Ngrams allows companies to search and analyze the text of millions of books in Google’s repository.

The right data: Answering the big questions

Overall, in order for businesses to answer the big questions guiding their initiatives, they must have access to the right data. Public, external sources can help significantly, as can a partnership with an expert in the big data field.

Aunalytics can not only help today’s enterprises gather and analyze their available information, but can also help fill any gaps that might hold back the success of an initiative. Our scalable big data solutions ensure that your organization has everything it needs to reach the valuable insights that will make all the difference.