Revolutionizing Data

Harnessing transformer architecture for tabular processing

A.I Hub
5 min read · Aug 13, 2024

In the vast ocean of AI advancements, transformers have made waves in natural language and vision tasks, but their potential doesn’t stop there. Applied to tabular data, these architectures are changing how we handle the most structured and fundamental form of data. What was once the domain of traditional methods is now being reshaped by the power and flexibility of transformers. Get ready to explore a new era where these cutting-edge models redefine tabular data analysis, unlocking insights and performance previously thought unattainable.

Table of Contents

  • Introduction
  • System requirements
  • Tabular data representation using transformers
  • TAPAS architecture

Introduction


In the modern era of data analytics and machine learning, we are not limited to unstructured data types such as text, images, or audio. Structured, or tabular, data holds significant value as well. The potential of transformers when applied to structured data is immense and offers an intriguing area for exploration. This section illustrates the application of transformer-based architectures to the realm of tabular data.

We will delve into three transformer architectures designed specifically for structured data processing: Google’s TAPAS, TabTransformer, and FT-Transformer.

System Requirements

Setting Up Environment:

  1. Install Anaconda on the local machine.
  2. Create a virtual environment.
  3. Install necessary packages in the virtual environment.
  4. Configure and Start Jupyter Notebook.
  5. Connect Google Colab with your local runtime environment.

Installing Anaconda On Local System

  1. Go to the Anaconda download page.
  2. Download appropriate version for your computer.
  3. Follow the instructions provided by the installer.
  4. If the installer prompts you to add Anaconda to the system’s PATH variable, do so. This enables you to use Anaconda’s features seamlessly from the command line.
  5. Verify that the installation succeeded by typing the following command in the terminal.
conda --version

Create a Virtual Environment

To create a virtual environment in Anaconda via the terminal, follow these steps.

  1. Open the terminal on your local machine.
  2. Type the following command and press Enter to create a new virtual environment. In the command below, the environment name is torch_learn and the Python version is 3.11.
conda create --name torch_learn python=3.11

3. Once the environment has been created, activate it by typing the following command.

conda activate torch_learn

4. Install the necessary packages in your environment. The following are the requirements for this section; install the packages each section requires.

pip install transformers
pip install datasets
pip install git+https://github.com/huggingface/diffusers
pip install accelerate
pip install ftfy
pip install tensorboard
pip install Jinja2
pip install bitsandbytes
pip install sentencepiece
pip install speechbrain
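Once the packages are installed, a quick sanity check from Python confirms the environment is ready. This is a minimal sketch; the package list simply mirrors the install commands above, and any name reported as missing still needs to be installed:

```python
import importlib

def check_packages(names):
    """Map each package name to its installed version, or None if missing."""
    status = {}
    for name in names:
        try:
            module = importlib.import_module(name)
            # Not every package exposes __version__; fall back to a marker.
            status[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            status[name] = None
    return status

# The core packages installed in the steps above.
required = ["transformers", "datasets", "accelerate", "ftfy", "tensorboard"]

for pkg, version in check_packages(required).items():
    print(f"{pkg}: {version if version else 'NOT INSTALLED'}")
```

Run this inside the activated torch_learn environment; an ImportError-free run with versions printed means the setup steps completed correctly.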

Tabular Data Representation Using Transformers


Following the substantial success of transformer models in representing natural language, there has been considerable interest in applying transformer architectures to tabular data representation. Current research and applications demonstrate a broad range of potential uses in this area, which include:

  1. Table-based fact checking — This application validates the veracity of
    textual inputs based on structured data serving as a fact checking
    table.
  2. Question-answering — This encompasses posing questions in free text
    format and retrieving specific cells from a table or aggregating information based on the query.
  3. Semantic parsing — This involves the conversion of free text into SQL
    queries, enabling direct interaction with databases.
  4. Table retrieval — This task involves searching for and retrieving the
    table that contains the answer to a specific query or requirement.
  5. Table metadata prediction — In this scenario, given the tabular data,
    the model predicts associated metadata.
  6. Table content population — This functionality allows for the prediction
    and filling in of corrupted or missing cells or rows in a table, helping
    maintain data integrity.
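To make the table-retrieval task above concrete, here is a toy lexical baseline that ranks candidate tables by token overlap with a query. This is an illustrative sketch only; the function names and tables are hypothetical, and real retrievers use learned dense encoders rather than word overlap:

```python
# Toy lexical baseline for table retrieval: score each table by the overlap
# between query tokens and the tokens in its headers and cells. This only
# shows the task's input/output shape, not a production approach.

def table_tokens(table):
    """Collect lowercase tokens from a table given as a list of dict rows."""
    tokens = set()
    for row in table:
        for key, value in row.items():
            tokens.update(str(key).lower().split())
            tokens.update(str(value).lower().split())
    return tokens

def retrieve(query, tables):
    """Return table ids sorted by descending token overlap with the query."""
    query_tokens = set(query.lower().split())
    scores = {
        table_id: len(query_tokens & table_tokens(rows))
        for table_id, rows in tables.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

tables = {
    "cities": [{"city": "Paris", "population": "2.1M"}],
    "movies": [{"title": "Inception", "year": "2010"}],
}
print(retrieve("population of Paris", tables))  # 'cities' ranks first
```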

TAPAS Architecture


Google’s TAPAS (Table Parser) model is built on top of BERT, one of the transformer-based models, and uses the BERT tokenizer. Figure 1.1 shows the architecture of TAPAS. The model is designed to read tables as a form of input in addition to text. Each table cell is tokenized into a token sequence, and the input is a linear sequence of tokens: a [CLS] token, the question tokens, a [SEP] token, and the flattened table. Additionally, there are two classification heads attached.

  1. Aggregation prediction
  2. Cell selection
Figure 1.1 - TAPAS architecture
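The input linearisation described above can be sketched in a few lines of plain Python. This is an illustrative approximation, not the real TAPAS tokenizer (the actual model uses BERT wordpiece tokenization, available via TapasTokenizer in Hugging Face transformers); simple whitespace splitting stands in for it here:

```python
# Illustrative sketch of how TAPAS linearises its input: a [CLS] token, the
# question tokens, a [SEP] token, then the table flattened row by row.
# Whitespace splitting stands in for the BERT wordpiece tokenizer.

def flatten_input(question, header, rows):
    tokens = ["[CLS]"] + question.lower().split() + ["[SEP]"]
    for cell in header:                     # header row first
        tokens.extend(str(cell).lower().split())
    for row in rows:                        # then body rows, left to right
        for cell in row:
            tokens.extend(str(cell).lower().split())
    return tokens

header = ["city", "population"]
rows = [["Paris", "2100000"], ["Rome", "2800000"]]
print(flatten_input("which city is larger?", header, rows))
```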

There are additional positional encodings compared to BERT. Let us discuss each of them.

  1. Token embedding — The embedding of the token itself, as in BERT.
  2. Positional embedding — The same as in BERT.
  3. Segment embedding — 0 for the question and 1 for the table.
  4. Column embedding — The index of the cell’s column.
  5. Row embedding — The index of the cell’s row.
  6. Rank embedding — Rank embeddings are used to encode the order of
    cells in a row or column. Figure 1.2 depicts the positional encodings.
Figure 1.2 - Positional encoding on TAPAS
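A small sketch can show how the segment, column, and row indices line up with a flattened input, assuming the TAPAS convention that question tokens receive index 0 and table cells receive 1-based indices. This is illustrative only; the real TapasTokenizer also computes rank and other token-type ids:

```python
# Sketch of the per-token index vectors TAPAS adds on top of BERT's
# positional embeddings. Question tokens get segment 0 and column/row 0;
# table tokens get segment 1 and their 1-based column/row indices.

def tapas_indices(question_tokens, header, rows):
    segment, column, row_idx = [], [], []
    # [CLS] + question + [SEP] all belong to the question segment.
    for _ in ["[CLS]"] + question_tokens + ["[SEP]"]:
        segment.append(0); column.append(0); row_idx.append(0)
    # Header cells: row 0 is conventionally used for column headers.
    for c, _ in enumerate(header, start=1):
        segment.append(1); column.append(c); row_idx.append(0)
    # Body cells get 1-based column and row indices.
    for r, cells in enumerate(rows, start=1):
        for c, _ in enumerate(cells, start=1):
            segment.append(1); column.append(c); row_idx.append(r)
    return segment, column, row_idx

seg, col, row = tapas_indices(["which", "city?"], ["city", "pop"],
                              [["Paris", "2.1M"]])
print(seg)  # 0s for the question part, 1s for the table part
print(col)
print(row)
```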

Conclusion

As we conclude our journey through the evolution of transformer architectures for tabular data processing, it is evident that we are standing on the brink of a new frontier in data analysis. From the foundational introduction of transformers to the system requirements and the representation of tabular data, the landscape has been irrevocably transformed. The TAPAS architecture, with its groundbreaking ability to query and interpret structured data, encapsulates this transformation, pushing the boundaries of what is possible. This shift not only redefines our approach to tabular data but also sets the stage for future innovations, where the fusion of transformers and tabular data will drive unprecedented levels of insight and decision making. The future of data processing has arrived, and it is powered by transformers.
