Review: Tabular

Build agile and responsive business intelligence solutions. Create a semantic model and analyze data using the tabular model in SQL Server 2016 Analysis Services to create corporate-level business intelligence (BI) solutions. Led by two BI experts, you will learn how to build, deploy, and query a tabular model by following detailed examples and best practices. This hands-on book shows you how to use the tabular model's in-memory database to perform rapid analytics, whether you are new to Analysis Services or already familiar with its multidimensional model.

Many companies have adopted SQL Server Analysis Services Tabular as the analytical engine for their product or service. Some have created a single large database deployed manually and queried by their front end. Others provide a multi-tenant service—creating databases on demand, and then accessing them through standard clients.

Other scenarios use a mix of these two approaches. The point of view is unlike that of creating a "classic" Business Intelligence solution. When the solution serves other software instead of a human user, the challenges are completely different. This article provides several reasons why Tabular could be the right choice for the analytical engine embedded in a service or an application.

Introduction

In these early years of SQL Server Analysis Services Tabular (SSAS Tabular, or Tabular hereinafter) adoption, SQLBI has helped several customers with their first implementations using the new xVelocity in-memory engine and the DAX language.

Existing customers of SSAS Multidimensional (Multidimensional hereinafter) have many reasons for not adopting Tabular: missing features, skills shortage, lack of tools, legacy (!) OLAP ecosystems. This is expected and very natural, as Multidimensional is complementary to Tabular and is not going to disappear.

Thus, users have adopted Tabular mainly for new projects—especially in companies that had not used Multidimensional before. However, in this adoption process, something surprising happened. My guess is that Microsoft was surprised by it as well. I have seen many companies adopt Tabular as the analytical engine for their product or service without actually creating a “BI project” in the canonical way (gather requirements, design a prototype, get feedback, improve the model, and loop until it is done).

Instead, they integrated Tabular features into their existing software as a back-end server for their analytical needs. In other words, users do not even know that Tabular is powering their reports. The usage may vary: some companies created a single large database deployed manually and queried by their front end (this made it necessary to write their own DAX query builder).

Others provide a multi-tenant service, creating databases on demand and then offering access to them through standard clients (such as Excel, Reporting Services, and other third-party tools). In other scenarios that we have helped to build, companies use a mix of both approaches. Until now, we have never been proactive in promoting Tabular for such scenarios. Companies contacted us when they were looking for a solution to a specific problem: an engine for their product or service with good performance and a low maintenance cost. These companies usually considered Multidimensional, but chose not to adopt it for two main reasons: MDX complexity (from the developers' point of view) and performance.

This is not to say that Multidimensional is slow, but creating an efficient Multidimensional database requires a major effort: it is hard to automate, and it is complex to optimize when an application generates the model automatically. Other common issues are DISTINCT COUNT measures, many-to-many relationships, and leaf-level calculations: all of these are features where obtaining good performance is a challenge. Often, when customers approached us asking for an engine, we developed both Multidimensional and Tabular proofs of concept. In nearly 90% of these cases, the customer chose Tabular because of its effectiveness and simplicity.

One of these customers is inContact, for which we provide a full case study in a separate paper. A few times, we received requests for a proof of concept of a Tabular solution from companies that were already using Multidimensional (and hitting some limitations because of their requirements) or that had discarded Multidimensional after building a prototype. They mainly asked whether the new analytical engine they had seen at a conference or read about in articles and blogs (because Microsoft never really pushed Tabular adoption) was good enough for their requirements. Our discovery was that yes, most of the time it was much more than "good enough." In this article, we would like to share in more detail some of the reasons why Tabular could be the right choice for the analytical engine embedded in a service or application.

The point of view is unlike that of creating a "classic" BI solution. When the solution serves other software instead of a human user, the challenges are completely different. We will try to help you make an initial evaluation, with the caveat that, before making any decision, you need to create a proof of concept for your specific scenario (by the way, SQLBI can help you do that, but, trust me, this is the only commercial you will find here).

Choosing an Embedded Analytical Engine

Among many other products, Analysis Services is a possible choice for an embedded analytical engine in an application or service. Moreover, it provides two different engines and model types: Tabular and Multidimensional. To make your choice, you need to perform a deep analysis of the requirements of each specific scenario.

In this section, you can find a description of the general requirements to consider and the reasons why Tabular is the best choice in many cases.

Requirements

Software products and services that offer analytical features typically need some of the following elements:

- A semantic model that allows a generic client to browse data and dynamically create queries.
- High scalability under concurrent user workloads.
- Top query performance when aggregating huge volumes of data.
- A query language that simplifies the automated generation of queries.
- The capability to script and automate several database activities (model definition, provisioning, backup, and maintenance).
- Low total cost of ownership (TCO), to reduce the costs of the solution over time.

Most applications use a relational database as a repository for their data. Often this becomes the data source for all of the reports and queries. Normally this is not a brilliant idea, because the application database is seldom designed to respond quickly to analytical queries. Nevertheless, there are some aspects (skills, licensing costs, hardware, and maintenance) for which using the same relational engine looks like a good choice. Here are some of the most important aspects to consider when deciding which database to use for your application analytics:

- Relational databases do not have a semantic model. The semantic model is needed by query builders (either Microsoft Excel or the application's query builder) to let users interact with the database. Without a semantic model, you may need to build an external data model to drive the client application in data navigation within the relational model.

- Serving many concurrent users requires a good cache policy. Usually, relational databases do not cache the intermediate results of a query; they mainly use the cache just to lower the number of I/O operations. Once data is in memory, a high concurrent workload might create a bottleneck in RAM access. Microsoft SQL Server's cache can have scalability issues under a high concurrent workload, even when using columnstore indexes.

- Relational databases offer index technologies that improve performance when a query aggregates data. For example, columnstore indexes in Microsoft SQL Server might be able to satisfy such a requirement. In an application database, however, adding indexes might create performance issues for the application, because indexes need to be updated whenever the underlying data changes. Thus, adding indexes until the analytics are fast enough is a dangerous technique to use.

- SQL is a flexible and general-purpose query language, and there are plenty of existing libraries that generate queries automatically (for example, LINQ in the .NET Framework; see the sketch after this list). However, the SQL language can be verbose and hard for the query engine to optimize, especially when an object-relational mapping (ORM) library generates the SQL query.

- Standard SQL includes a rich set of statements to define the data model (Data Definition Language, DDL), and product-specific extensions to SQL are available to manage a SQL Server database lifecycle. Several database products, such as Microsoft SQL Server, also include custom APIs for other languages (e.g., C#) to automate deployment, backup, and maintenance. Thus, a product like Microsoft SQL Server already complies with the scriptability/automation requirements.

- In order to obtain good performance for queries that aggregate huge volumes of data, a relational database uses additional structures that duplicate data in a format that makes query execution faster (such as indexes). These structures require maintenance over time, which might include additional operations (such as periodic reviews of the index schema, because the distribution of data changes over time). This results in a "hidden" additional cost of ownership.

This is a recap of these considerations in a small matrix:

Feature                            | SQL Server
-----------------------------------|-----------------------------------
Semantic model                     | N/A
High scalability                   | Good, but SSAS is better
Query performance for huge volumes | Fair, but SSAS is far better
Query language                     | Good, but verbose
Automated handling of the database | Good
Total cost of ownership            | Good, but might have hidden costs

As you can see, sometimes there is the need for an analytical engine different from the standard relational database used to store an application's data.
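As a minimal illustration of the automated query generation mentioned in the list above, here is a LINQ sketch (LINQ to Objects over invented in-memory data; an ORM provider such as Entity Framework would translate the same query shape into SQL against the application database):

```csharp
using System;
using System.Linq;

static class QueryGenerationDemo
{
    class Sale
    {
        public string Color = "";
        public decimal Amount;
    }

    static void Main()
    {
        var sales = new[]
        {
            new Sale { Color = "Red",  Amount = 10m },
            new Sale { Color = "Blue", Amount = 5m  },
            new Sale { Color = "Red",  Amount = 7m  }
        };

        // The query is composed from typed expressions instead of SQL text;
        // an ORM provider would translate this shape into (often verbose) SQL.
        var totalsByColor =
            from s in sales
            group s by s.Color into g
            select new { Color = g.Key, Total = g.Sum(x => x.Amount) };

        foreach (var row in totalsByColor)
            Console.WriteLine("{0}: {1}", row.Color, row.Total);
    }
}
```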

Why Analysis Services Tabular

After considering the requirements for an analytical engine, Analysis Services is usually a good candidate, even just for licensing reasons: many SQL Server licenses also include Analysis Services if it is installed on the same server. However, the licensing cost is only part of the equation. Analysis Services also offers a semantic model with a cache system that provides performance and scalability. It has two model types, with different internal engines: Tabular and Multidimensional.

Analysis Services 2012 introduced the Tabular model, whereas the Multidimensional model has been around since Analysis Services 2005. Other than using different engines, the main differences between Tabular and Multidimensional are the languages and tools used to define the models. A complete description of the differences is outside of the scope of this article; more details are available in the white paper published by Microsoft. Several companies that we have worked with have adopted SQL Server Analysis Services Tabular as the analytical engine for their products or solutions. The reasons why Tabular was preferred to Multidimensional are performance, maintenance, cost of ownership, and flexibility in data model design. This flexibility is very important considering the requirements to automate the provisioning of the data model.

One of the limits to Tabular adoption is the licensing cost, because it requires a Business Intelligence or Enterprise edition of SQL Server. The cheaper Standard edition includes Multidimensional but not Tabular. For this reason, to reduce the final price of their products, many small independent software vendors (ISVs) choose Multidimensional over Tabular, even if Tabular would be a better choice from a technical point of view. The following section presents the experiences of several companies that chose Tabular as the analytical engine for their products or solutions.

Tabular Experiences from the Field

In the previous section, we explained why choosing an Analysis Services data model can be a good option for software analytics. This section explores the issues of an embedded analytical engine in more detail, sharing the experience of early adopters of Analysis Services Tabular and some of the reasons why they made this choice.

Automated Deployment

In multi-tenant scenarios, the system provides a database for each customer.

Moreover, if the customer has the option of customizing entities (for example, adding custom columns or calculations), you end up with a different schema for each database. Classic relational solutions to this requirement, such as entity-attribute-value (EAV) models or XML columns, are not compatible with Tabular and with the requirements of a semantic model, which must clearly identify each attribute. As of SQL Server 2014, Tabular does not have a real data definition language (DDL).

An application can deploy a database either by creating XMLA statements or by using the AMO (Analysis Management Objects) library. With AMO, you can use C# or any other .NET language to automate the deployment operation. A DDL for Tabular would be very welcome, because it would make it easier to compare the differences between databases in different tenants by just looking at the differences between two human-readable text files. Unfortunately, as of now, this is not feasible, although there is a community tool that offers a similar feature.
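As a rough sketch of this kind of automation with AMO (the instance name, tenant database name, and file paths below are invented for illustration, and error handling is omitted):

```csharp
using System;
using System.IO;
using Microsoft.AnalysisServices; // AMO

class TenantDeployment
{
    static void Main()
    {
        var server = new Server();
        try
        {
            // Hypothetical Tabular instance name.
            server.Connect(@"Data Source=localhost\TABULAR");

            // Deploy (create or alter) the tenant database by executing
            // an XMLA script generated for this specific tenant.
            string xmla = File.ReadAllText(@"C:\deploy\Tenant001.xmla");
            XmlaResultCollection results = server.Execute(xmla);
            foreach (XmlaResult result in results)
                foreach (XmlaMessage message in result.Messages)
                    Console.WriteLine(message.Description);

            // AMO also covers maintenance tasks, such as backups.
            Database db = server.Databases.FindByName("Tenant001");
            if (db != null)
                db.Backup(@"C:\backup\Tenant001.abf");
        }
        finally
        {
            if (server.Connected)
                server.Disconnect();
        }
    }
}
```

In a multi-tenant service, the same pattern runs inside the provisioning workflow, with the XMLA script generated from the tenant's metadata.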

Dynamic Query Generation

Many applications have embedded features that allow users to create custom reports. This requires the dynamic generation of queries sent to the database engine. SQL is the language of choice for a relational database, whereas Tabular offers DAX and MDX. The most common choice for Tabular is DAX. It is more similar to SQL than MDX is, and it is a functional language, which makes it easier to compose and encapsulate operations in subqueries. An open-source library, created by one of our customers and then released publicly, simplifies the generation of a query tree and the production of the DAX code in a .NET application.
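To give an idea of the approach (this is not that library, just a minimal sketch using ADOMD.NET; the Sales model, its columns, and the connection string are hypothetical):

```csharp
using System;
using Microsoft.AnalysisServices.AdomdClient; // ADOMD.NET

class DynamicDaxReport
{
    static void Main()
    {
        // Run-time report choices, e.g. coming from the application's UI.
        string groupByColumn = "Product[Color]";
        string measure = "[Sales Amount]";

        // Compose a DAX query from the user's selections.
        string dax =
            "EVALUATE ADDCOLUMNS ( VALUES ( " + groupByColumn + " ), " +
            "\"Amount\", " + measure + " )";

        using (var connection = new AdomdConnection(
            @"Data Source=localhost\TABULAR;Catalog=Tenant001"))
        {
            connection.Open();
            using (var command = new AdomdCommand(dax, connection))
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader[0], reader[1]);
            }
        }
    }
}
```

A real query builder works on a tree of such expressions, so that filters and measures can be composed and validated before the final DAX string is produced.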

No Maintenance Required

If an application needs to generate a SQL Server database schema automatically, then it faces a further complexity: creating indexes and tracking statistics to maintain good performance. With automated deployment, we expect that there is no database administrator (DBA) working on planning and implementing maintenance. Thus, the automation must include this activity too. A similar level of complexity would be present even with Multidimensional, at least for the definition and maintenance of aggregations that might change across databases.

Tabular, on the other hand, does not have indexes or aggregations. The Tabular engine gathers statistics on demand while running a query and requires no additional data structures. These internal queries are very fast and do not interfere with overall query performance, thanks to the optimized in-memory storage engine. The important factors that determine system performance are the model design and the query structure. Different queries producing the same result might have different query plans with different response times, so the only possible optimization is using the best DAX constructs: there are no indexes or external hints that can produce a different query plan for the same DAX query. Thus, there is no need to plan manual or automatic maintenance of auxiliary structures used as performance optimization tools, resulting in a lower TCO for Tabular compared to SQL Server or Multidimensional.

Performance

Every query in Tabular might perform a complete scan of the columns involved in the request. A small internal cache avoids running the same storage engine request multiple times within the same DAX query.

Even when there is a complete scan of a column, the execution time of a storage engine request is usually in the milliseconds range because of optimizations based mainly on data compression, which lower the amount of RAM read for each request. Thanks to these fast scans, leaf-level calculations on data do not affect performance (unlike what happens in Multidimensional). Tabular always works at the most granular level, and its design provides outstanding speed. Applications that benefit most from the Tabular engine have the following characteristics (illustrated in the sketch after this list):

- Extensive use of leaf-level calculations, such as simulations or other dynamic calculations based on external parameters, with algorithms applied row by row on large tables.
- Distinct count calculations on any column in any table of the data model.
- Calculations over many-to-many relationships (also combined with distinct count calculations).
- Free-form queries, where filtering and grouping can happen on any combination of columns and values.
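For instance, the first two workloads map to DAX expressions like the following (a hypothetical Sales table; the DAX is shown as .NET string constants, the form in which an embedding application would hold it):

```csharp
static class WorkloadExamples
{
    // Leaf-level calculation: a price simulation evaluated row by row over a
    // large Sales table. The 1.10 uplift factor would, in practice, come from
    // an external application parameter.
    public const string SimulatedSales =
        "EVALUATE ROW ( \"Simulated\", " +
        "SUMX ( Sales, Sales[Quantity] * Sales[Price] * 1.10 ) )";

    // Distinct count: available on any column of any table, with no need to
    // predefine an aggregation or an index for it.
    public const string UniqueCustomers =
        "EVALUATE ROW ( \"Customers\", DISTINCTCOUNT ( Sales[CustomerKey] ) )";
}
```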

That said, there are two main points of attention for performance:

- The data model design affects compression, which in turn affects query performance. For example, high-cardinality columns should be avoided (as in the case of a timestamp that is not required for analysis) or split (as in the case of a datetime value, which compresses better when stored in two columns, one for the date and one for the time).
- The quality of the DAX code is key for both queries and measures. Different formulations of DAX queries and calculations might produce the same result, but with different speed, as shown in the sketch below. In fact, the query plan generated by the Tabular engine is very sensitive to the quality of the DAX code, because its query optimizer is not as mature and advanced as the SQL Server one.
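As a hypothetical example of the second point (an invented Sales/Product model; the DAX again appears as strings, as a query generator would emit it), both queries below return the same total, but they usually get different query plans:

```csharp
static class EquivalentDaxQueries
{
    // Row-by-row filter: FILTER iterates the whole Sales table and calls
    // RELATED for every row, pushing most of the work to the single-threaded
    // formula engine.
    public const string Slower =
        "EVALUATE ROW ( \"Amount\", " +
        "SUMX ( FILTER ( Sales, RELATED ( Product[Color] ) = \"Red\" ), " +
        "Sales[Quantity] * Sales[Price] ) )";

    // Column predicate: CALCULATE applies the filter as a simple column
    // condition that the multi-threaded storage engine can evaluate during
    // its scan.
    public const string Faster =
        "EVALUATE ROW ( \"Amount\", " +
        "CALCULATE ( SUMX ( Sales, Sales[Quantity] * Sales[Price] ), " +
        "Product[Color] = \"Red\" ) )";
}
```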

Adopting best practices and generating DAX code in a smart way is essential to obtain the optimal level of performance. Being able to read DAX query plans and other events provided by the profiler is an important skill for optimizing measures and queries. Tabular usually provides good performance, but it is always important to plan tests and optimizations as part of the development process, because of the impact these could have on the database design. Since optimization does not depend on additional structures, it is important not to delegate this activity to a support team after model deployment. Instead, you should constantly check the performance impact of each modeling decision.

Bottlenecks

In our experience, you can solve most of the performance issues of a Tabular model by rewriting the DAX expressions of formulas and queries. With experience and education, this type of optimization usually becomes unnecessary, because developers learn to write DAX formulas in the right way at the first draft.

With that said, two issues may require more complex optimizations based on the specific design of the data model:

- High-cardinality columns
- Complex formula engine calculations

High-Cardinality Columns

As a general rule of thumb, a high-cardinality column has more than a million unique values. High-cardinality columns affect the performance of several operations in a query. In general, any scan of such a column tends to be slow because of its relatively low compression rate.

In particular, this usually affects distinct count calculations and relationships between tables based on high-cardinality columns. But even a simple sum over a high-cardinality column is much slower than the same operation on a low-cardinality column for the same number of rows in a table. A distinct count calculation may be relatively slow compared to other calculations such as sum, average, min, and max. A fix released in later builds of Analysis Services greatly alleviated this issue; it is important to use a build that includes this fix, whether on Analysis Services 2012 or Analysis Services 2014. High-cardinality columns can still be a performance issue when they define a relationship between two tables. In this case, less selective queries might show slower performance.

For example, consider a database that has sales transactions in a fact table related to a dimension with 10 million customers. Calculating sales for male customers requires filtering 5 million customers over the fact table, which may be slow. However, this type of issue does not affect queries that filter a small number of customers (for example, thousands of customers living in a particular zip code).

Complex Formula Engine Calculations

The DAX engine evaluates queries using an architecture that includes a formula engine and a storage engine. The storage engine (also known as VertiPaq, or the xVelocity in-memory analytical engine) performs simple calculations leveraging a scalable multi-threaded architecture.

The formula engine executes more complex operations (especially filters and calculations) in a single-threaded environment. When a DAX query demands a big effort from the formula engine, the result is lower performance. When this happens, it is usually necessary to rewrite the DAX query to obtain a different (and faster) query plan, by moving most of the computation to the storage engine or by reducing the computation complexity. This type of DAX optimization might be possible by using different DAX syntax or by adapting the data model to produce a better query plan. However, these performance issues can be hard to solve in a system that generates the data model and/or the DAX queries automatically.

The lack of some features in DAX (such as temporary tables, or set operations like union, except, and intersect) makes it difficult to express certain operations in an efficient way.

Development Tools

The development environment for a Tabular model is SQL Server Data Tools for Business Intelligence. This environment requires you to use its user interface to create each measure and calculated column in the data model. The entire model is stored in a single XML file, which makes it hard to compare different versions of the same project.

Moreover, hosting the full project in a single file is a big obstacle when different programmers need to edit the same data model at the same time. Finally, editing DAX formulas in Visual Studio is not very productive, and there is room for improvement in future releases. The community has created several tools to improve productivity in Visual Studio and in testing/debugging Tabular models, including a Visual Studio add-in that improves the experience of editing Tabular models.
