Protein Folding With AlphaFold2: Chapter Four

MoleculeAI
Jul 15, 2023


In the last chapter, we discussed the Evoformer block of the AlphaFold2 [1] architecture and its neural network components. Self-attention enriched the MSA representation as well as the pairwise representation by exchanging information between the two across multiple layers. In this blog, we will discuss the final module of the network: the “structure” module.

What is the purpose of the “structure” module?

Its main task is to convert the feature representation of the target sequence of amino acids (residues) into the coordinates of the constituent atoms in three-dimensional space.

It takes three inputs: (i) the pairwise representation of residues, (ii) the MSA representation of the sequence, providing local frame information for each residue, and (iii) the 3D backbone structure, represented as an independent rotation and translation for each residue with respect to the global frame. These rotations and translations capture the geometry of the N-Cα-C atoms. The 3D backbone information helps to confine the side chain locations within each frame. This 3D backbone representation is also called the “residue gas”.
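To make the residue gas concrete, here is a minimal Python sketch (not AlphaFold2’s actual code; the class and method names are ours) of a per-residue rigid frame, i.e. a rotation plus a translation mapping local coordinates into the global frame:

```python
import numpy as np

class RigidFrame:
    """One residue's frame: a proper rotation R and a translation t."""
    def __init__(self, rotation: np.ndarray, translation: np.ndarray):
        self.R = rotation      # (3, 3) rotation matrix, det(R) = +1
        self.t = translation   # (3,) translation vector

    def to_global(self, x_local: np.ndarray) -> np.ndarray:
        # Map a point from this residue's local frame to the global frame.
        return self.R @ x_local + self.t

    def to_local(self, x_global: np.ndarray) -> np.ndarray:
        # Inverse map: express a global point in this residue's local frame.
        return self.R.T @ (x_global - self.t)

# All frames can start as the identity at the origin; the structure
# module then iteratively refines each residue's rotation and translation.
identity = RigidFrame(np.eye(3), np.zeros(3))
```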

The module updates the MSA representation of the target sequence and the backbone of each residue. It also computes the torsion angles.

The Building Blocks of the “structure” module

The structure module consists of 8 layers with shared weights, and each layer has three main components, as shown in Figure 1. Each layer refines the single MSA representation and the backbone frames, which finally yields all atom coordinates.

Figure 1: Diagram of the structure module.

Invariant Point Attention (IPA)

In this submodule, geometry-based attention is applied to each residue of the sequence. It augments the usual attention queries, keys, and values with 3D points produced in the local frame of each residue, such that the final value is invariant to global rotations and translations. This works because distance is what enters the attention weights: the L2 norm of a difference between points is invariant under rigid transformations, so the global transformation cancels out.
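A quick numerical check of this claim, as a minimal numpy sketch:

```python
# The distance between two points is unchanged when the same rigid motion
# (rotation R, translation t) is applied to both, because
# ||(R a + t) - (R b + t)|| = ||R (a - b)|| = ||a - b||.
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)

# A random proper rotation via QR decomposition, plus a random translation.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))  # flip sign if needed so det(R) = +1
t = rng.normal(size=3)

d_before = np.linalg.norm(a - b)
d_after = np.linalg.norm((R @ a + t) - (R @ b + t))
assert np.isclose(d_before, d_after)  # identical up to float error
```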

The IPA module updates the abstract single representation using multiple “channels” of attention:

1. It first computes the usual self-attention weights for the single representation.

2. Then it converts the pair representation into a simple element-wise “pair bias”.

3. Next, it converts the backbone frames into “distance affinities”, which can be thought of simply as representing distances between residues.

4. Finally, it adds all of these to obtain a final set of attention weights, which are then combined with vector representations of each input type and summed for the final single representation update (see the sketch below).
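Below is a simplified, single-head sketch of how these terms could be combined. The real IPA uses multiple heads, multiple query/key points per residue, and learned per-term weights, so the names and shapes here are purely illustrative:

```python
import numpy as np

def ipa_attention_weights(q, k, pair_bias, query_pts, key_pts):
    """q, k: (N, d) scalar queries/keys from the single representation;
    pair_bias: (N, N) bias derived from the pair representation;
    query_pts, key_pts: (N, P, 3) points already mapped into the global
    frame using each residue's backbone transform."""
    N, d = q.shape
    # 1. Usual scaled dot-product term on the single representation.
    scalar_term = (q @ k.T) / np.sqrt(d)
    # 2. Element-wise bias from the pair representation.
    bias_term = pair_bias
    # 3. Distance affinity: large squared distances between the 3D points
    #    lower the weight; invariant under global rigid motions.
    diff = query_pts[:, None, :, :] - key_pts[None, :, :, :]  # (N, N, P, 3)
    dist_term = -np.sum(diff ** 2, axis=(-1, -2))
    # 4. Sum all contributions, then softmax over the key axis.
    logits = scalar_term + bias_term + dist_term
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return w / w.sum(axis=-1, keepdims=True)
```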

Now, why do we require the invariance property here?

If a shared rigid motion (a proper rotation and translation, but not a reflection) is applied to all residues while keeping the embeddings fixed, the resulting update in the local frames (the individual residue frames) will be the same. This implies that the updated structure is simply transformed by the same shared rigid motion.

In other words, when a rigid motion is applied uniformly to all residues in the protein structure, the individual local frames of each residue will be updated consistently. This property is referred to as equivariance under rigid motions, meaning that the update rule of the structure module remains consistent regardless of the rigid motion applied.

This characteristic is beneficial because it ensures that the structure module’s predictions are not affected by overall rigid motions of the protein, such as translations or rotations of the entire structure. It allows the module to focus on capturing and refining local structural features without being influenced by global transformations.
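A toy numerical check of this equivariance property (a sketch with randomly chosen frames, not AlphaFold2 code): an update expressed in a residue’s local frame commutes with any shared rigid motion of the whole structure.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_rot(rng):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return Q * np.sign(np.linalg.det(Q))   # proper rotation, det = +1

R_i, t_i = rand_rot(rng), rng.normal(size=3)   # one residue's frame
delta = rng.normal(size=3)                     # update in the local frame

R_g, t_g = rand_rot(rng), rng.normal(size=3)   # shared global rigid motion

# Updated position from the original frame, then moved globally...
moved_after = R_g @ (R_i @ delta + t_i) + t_g
# ...equals the update computed directly in the globally moved frame.
moved_before = (R_g @ R_i) @ delta + (R_g @ t_i + t_g)
assert np.allclose(moved_after, moved_before)
```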

Backbone Update

After a transition layer is applied to the single representation output by IPA, this submodule uses a linear layer to update the backbone, predicting a rotation (in quaternion form) and a translation for each residue. Because these updates are expressed within the local frame of each residue, the overall attention-and-update block is an equivariant operation on the residue gas.
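As a hedged sketch of what such an update step can look like: following the AlphaFold2 supplement, the network can predict three quaternion components (b, c, d) per residue, with the first component fixed to 1 before normalization, which guarantees a valid rotation. The function names below are ours:

```python
import numpy as np

def quat_to_rotmat(b: float, c: float, d: float) -> np.ndarray:
    """Rotation matrix from the non-normalized quaternion (1, b, c, d)."""
    a = 1.0
    n = np.sqrt(a*a + b*b + c*c + d*d)
    a, b, c, d = a/n, b/n, c/n, d/n
    # Standard quaternion-to-rotation-matrix formula.
    return np.array([
        [a*a + b*b - c*c - d*d, 2*(b*c - a*d),         2*(b*d + a*c)],
        [2*(b*c + a*d),         a*a - b*b + c*c - d*d, 2*(c*d - a*b)],
        [2*(b*d - a*c),         2*(c*d + a*b),         a*a - b*b - c*c + d*d],
    ])

def backbone_update(R_i, t_i, b, c, d, t_update):
    """Compose the predicted local update with the residue frame (R_i, t_i)."""
    R_u = quat_to_rotmat(b, c, d)
    return R_i @ R_u, R_i @ t_update + t_i
```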

Side chain and backbone torsion angle predictions

Now, to obtain all atom coordinates, the module only needs to parameterize the torsion angles of each residue. It defines a rigid group for each of the 3 backbone and 4 side chain torsion angles. A shallow ResNet predicts each torsion angle as a point in 2D (effectively its sine and cosine). The predicted torsion angles are then transformed into frames representing the rigid groups of atoms. Finally, all atom coordinates are calculated using the local frames, with both rotations and translations available, together with the side chain torsion angles.
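An illustrative sketch of this parameterization: normalizing the ResNet’s raw 2D output yields the angle’s cosine and sine directly, without ever computing the angle itself, and that pair can then drive a rotation of the rigid group about its bond axis (the axis convention and names here are assumptions for this sketch):

```python
import numpy as np

def torsion_to_rotation(raw_2d: np.ndarray) -> np.ndarray:
    """raw_2d: the ResNet's unnormalized 2D output for one torsion angle."""
    cos_a, sin_a = raw_2d / np.linalg.norm(raw_2d)
    # Rotation about the x-axis, taken here as the bond axis of the
    # rigid group in its local frame.
    return np.array([
        [1.0, 0.0,    0.0],
        [0.0, cos_a, -sin_a],
        [0.0, sin_a,  cos_a],
    ])

# Example: position an idealized atom of a rigid group for one angle.
atom_local = np.array([0.5, 1.2, 0.0])
rotated = torsion_to_rotation(np.array([0.3, 0.9])) @ atom_local
```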

How do we optimize each of these steps?

AlphaFold2 incorporates a special structure loss called the frame aligned point error (FAPE). Roughly, it is computed by viewing every atom from a number of different local frames, for both the predicted and the true protein structures, and then averaging a simple (clamped) L2 norm of their differences. Along with this, the network also estimates model confidence in terms of the predicted Local Distance Difference Test (pLDDT) and the predicted Template Modeling (pTM) scores.
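A rough, simplified sketch of FAPE under these definitions (the actual loss includes further details such as gradient-safe norms and separate clamped and unclamped variants; the names here are ours):

```python
import numpy as np

def fape(frames_pred, frames_true, atoms_pred, atoms_true, clamp=10.0):
    """frames_*: lists of (R, t) pairs, one per residue; atoms_*: (M, 3)."""
    errs = []
    for (Rp, tp), (Rt, tt) in zip(frames_pred, frames_true):
        # Express every atom in this residue's local frame (inverse transform).
        local_pred = (atoms_pred - tp) @ Rp   # row-wise R.T @ (x - t)
        local_true = (atoms_true - tt) @ Rt
        d = np.linalg.norm(local_pred - local_true, axis=-1)
        errs.append(np.minimum(d, clamp))     # clamp large errors at 10 Å
    return float(np.mean(errs)) / clamp       # scale to a unitless loss
```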

In summary, the structure module refines the 3D protein structure by updating each residue’s local frame in a way that is equivariant to global rigid motions, so its predictions do not depend on the global reference frame. It is trained with auxiliary losses and the FAPE loss, and it predicts confidence scores (pLDDT) for evaluation.

In conclusion, AlphaFold2 has demonstrated significant advancements in protein structure prediction, achieving remarkable accuracy and reliability. It outperformed previous methods in the CASP14 competition and provided high-quality structural predictions for a wide range of proteins, making substantial contributions to the field of structural biology. AlphaFold2’s success holds promise for various applications, including understanding protein functions, drug discovery, and advancing our knowledge of biological systems.

At Molecule AI, we’re creating cutting-edge approaches that harness the power of deep learning in the realm of protein design. To learn more, feel free to contact us at info@moleculeai.com.

References:

[1] Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., Bridgland, A., Meyer, C., Kohl, S., Ballard, A., Cowie, A., Romera-Paredes, B., Nikolov, S., Jain, R., Adler, J., Back, T., Petersen, S., Reiman, D., Clancy, E., Zielinski, M., Steinegger, M., Pacholska, M., Berghammer, T., Bodenstein, S., Silver, D., Vinyals, O., Senior, A. W., Kavukcuoglu, K., Kohli, P., & Hassabis, D. (2021). Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873), 583–589.
