20 Self-Attention Interview Questions and Answers (2023)

Self-attention is a foundational concept in modern natural language processing and machine learning. It is not a neural network in itself but a mechanism used inside neural networks that lets every element of an input relate to every other element, and it powers models for tasks such as text classification, language understanding, and image recognition. As self-attention grows in popularity, employers are beginning to ask about it during job interviews. In this article, we discuss the most common self-attention interview questions and provide tips on how to answer them.

Here are 20 commonly asked Self-Attention interview questions and answers to prepare you for your interview:

1. What is the difference between attention and self-attention?

Attention is a mechanism used in deep learning models to focus on certain parts of an input. Classically, it lets a model relate elements of one sequence to elements of another, for example aligning each word of a translation with the source words that produced it. Self-attention, also known as intra-attention, is a type of attention that relates elements within the same sequence: instead of connecting an input to a separate output, it looks at how the elements of one sequence interact with each other. For example, given two words in a sentence, self-attention scores how strongly those words relate to one another rather than treating them in isolation. Because every element can attend directly to every other element regardless of distance, self-attention is particularly effective at capturing long-range dependencies within a sequence.

2. Can you explain how self-attention works in NLP models?

Self-attention is a mechanism used in natural language processing (NLP) models to allow the model to focus on certain parts of an input sequence. It works by allowing the model to attend to different parts of the input at the same time, rather than sequentially. This allows for more efficient and accurate representation of the data.

In self-attention, every word or token in the input sequence is compared with every other token. For each pair, the model computes a similarity score, typically a scaled dot product between a query vector and a key vector, and a softmax turns each token's scores into attention weights. These weights determine how much every other token contributes to that token's updated representation.

The self-attention mechanism can also be used to capture long-term dependencies between words in a sentence. By attending to multiple words simultaneously, the model can better understand the context of the sentence and make more accurate predictions. Self-attention has been shown to improve performance on many NLP tasks such as machine translation, text summarization, and question answering.
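To make this concrete, here is a minimal sketch of single-head scaled dot-product self-attention in PyTorch. The weight matrices and toy input sizes are illustrative assumptions, not taken from any particular model:

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention over x of shape (seq_len, d_model)."""
    q = x @ w_q                           # queries
    k = x @ w_k                           # keys
    v = x @ w_v                           # values
    d_k = q.size(-1)
    scores = q @ k.T / d_k ** 0.5         # every token scored against every other token
    weights = F.softmax(scores, dim=-1)   # each row becomes a distribution over the sequence
    return weights @ v                    # outputs are weighted mixes of value vectors

# Toy example: 4 tokens, 8-dimensional embeddings, random weights.
torch.manual_seed(0)
x = torch.randn(4, 8)
w_q, w_k, w_v = (torch.randn(8, 8) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([4, 8])
```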

3. How does multi-head attention work?

Multi-head attention is a type of self-attention mechanism that represents multiple different relationships between input elements at once. It works by projecting the query, key, and value vectors into several lower-dimensional subspaces, one per head, and each head learns to attend to different parts of the input sequence. This allows for richer representations of the data as well as better generalization across tasks. Each head produces an output vector; these are concatenated and passed through a linear layer before being used in the final prediction. By using multiple heads, multi-head attention can capture more varied relationships between input elements than single-head attention.
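As a quick illustration, PyTorch ships a multi-head attention module; the sketch below shows self-attention with it (the embedding size, head count, and input shapes are arbitrary choices for the example):

```python
import torch
import torch.nn as nn

embed_dim, num_heads = 64, 8   # each head works in a 64 / 8 = 8-dimensional subspace
mha = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

x = torch.randn(2, 10, embed_dim)   # (batch, seq_len, embed_dim)
out, weights = mha(x, x, x)         # self-attention: query = key = value = x
print(out.shape)                    # torch.Size([2, 10, 64]); heads concatenated, then re-projected
print(weights.shape)                # torch.Size([2, 10, 10]); averaged over heads by default
```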

4. Why do we need to use a feedforward neural network when performing self-attention?

The feedforward network matters because self-attention on its own is mostly a weighted averaging operation: each output is a linear combination of value vectors. The position-wise feedforward network that follows each attention layer in a transformer supplies the non-linearity the model needs. It transforms each position's representation independently, letting the model re-mix and re-combine the features that attention has gathered from across the sequence. Without it, stacking attention layers would amount to composing largely linear maps, which would severely limit what the model can represent.
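A minimal sketch of that position-wise block, assuming transformer-style conventions (the sizes here are arbitrary; the original transformer used a hidden layer four times wider than the model dimension):

```python
import torch
import torch.nn as nn

class PositionwiseFFN(nn.Module):
    """Two-layer MLP applied independently at every sequence position."""
    def __init__(self, d_model=64, d_hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.ReLU(),                     # the non-linearity attention itself lacks
            nn.Linear(d_hidden, d_model),
        )

    def forward(self, x):                  # x: (batch, seq_len, d_model)
        return self.net(x)                 # Linear broadcasts over the sequence dimension

ffn = PositionwiseFFN()
print(ffn(torch.randn(2, 10, 64)).shape)   # torch.Size([2, 10, 64])
```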

5. Is it possible to perform self-attention without using a masking layer? If yes, then why would you want to use one?

Yes, it is possible to perform self-attention without a masking layer; encoder-style models such as BERT attend bidirectionally over the whole input without one. A causal mask is needed when the model must not see the future: in autoregressive tasks like language modeling, it prevents each position from attending to tokens that have yet to be generated. Without it, the model could simply copy the answer during training and would fail at inference time. A related padding mask stops the model from attending to the padding tokens used to batch variable-length sequences.
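Here is a small sketch of how a causal mask is typically applied to the attention scores (sizes are illustrative):

```python
import torch
import torch.nn.functional as F

seq_len = 5
scores = torch.randn(seq_len, seq_len)   # raw query-key scores

# Causal mask: position i may attend only to positions 0..i.
mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
scores = scores.masked_fill(mask, float('-inf'))   # future positions get -inf ...
weights = F.softmax(scores, dim=-1)                # ... so softmax assigns them weight 0

print(weights[0])   # the first token can only attend to itself: tensor([1., 0., 0., 0., 0.])
```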

6. What are some examples of real-world applications that use self-attention?

Self-attention is a powerful tool that has been used in many real-world applications. One example of this is natural language processing (NLP). Self-attention can be used to better understand the context and meaning of words within sentences, allowing for more accurate translations and text summarization.

Another application of self-attention is computer vision. By using self-attention, computers can learn to identify objects in images with greater accuracy than traditional methods. This technology is being used in autonomous vehicles to help them recognize obstacles on the road and make decisions accordingly.

Finally, self-attention has also been applied to speech recognition. By using self-attention, machines are able to better distinguish between different sounds and accurately transcribe spoken words into text. This technology is being used in virtual assistants such as Siri and Alexa to provide users with more accurate responses.

7. Can you give an example of where self-attention has been used for image classification?

Self-attention has been used for image classification and related vision tasks in a variety of ways. One example is using self-attention to improve object detection: the model learns relationships between objects within an image and applies those learned relationships to detect objects more reliably in new images. This technique has been reported to yield meaningful accuracy gains over purely convolutional baselines.

Another example of self-attention being used for image classification is in the area of semantic segmentation. Here, self-attention is used to identify regions of interest within an image and then classify them based on their content. This can be useful for tasks such as medical imaging where it is important to accurately identify different types of tissue or organs. Self-attention has also been used to improve the accuracy of facial recognition systems by learning relationships between facial features.

8. What are the main challenges faced by self-attention mechanisms?

Self-attention mechanisms are a powerful tool for natural language processing tasks, but they come with their own set of challenges. One of the main challenges is that self-attention models require large amounts of data to train effectively. This can be difficult to obtain in some cases, as it requires a lot of labeled data and resources. Additionally, self-attention models tend to have high computational complexity due to the number of parameters involved. This makes them more difficult to optimize and can lead to longer training times. Finally, self-attention models can suffer from overfitting if not properly regularized. This means that the model may learn patterns from the training data that do not generalize well to unseen data.

9. How can you improve the performance of self-attention based models?

Self-attention based models can be improved in a variety of ways. One is increasing the number of layers and heads: more layers let the model capture more complex relationships between different parts of the input, while more heads let it track several kinds of relationships in parallel. Batch size also matters; larger batches give more stable gradient estimates, though batch size interacts with the learning rate and the two should be tuned together.

Another way to improve performance is through regularization techniques such as dropout or weight decay. These methods help reduce overfitting and allow the model to generalize better on unseen data. Finally, hyperparameter tuning can also be used to optimize the model’s performance. This involves adjusting parameters such as learning rate, optimizer type, and other hyperparameters to find the best combination that yields the highest accuracy.
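For instance, the two regularizers mentioned above map directly onto standard PyTorch knobs; the values below are common starting points rather than tuned recommendations:

```python
import torch
import torch.nn as nn

# Dropout is built into the transformer layer; weight decay lives in the optimizer.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=8, dropout=0.1, batch_first=True)
model = nn.TransformerEncoder(layer, num_layers=4)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.01)
```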

10. Can you explain what positional encoding is? Why do we need to use it?

Positional encoding is a technique used in self-attention networks to provide information about the relative or absolute position of tokens within a sequence. It is necessary because self-attention itself is order-agnostic: permuting the input tokens simply permutes the outputs, so without positional information the model could not distinguish "dog bites man" from "man bites dog". By adding positional encodings to the input embeddings, the network can learn patterns and relationships between words that depend on where they appear in the sentence.
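The original transformer used fixed sinusoidal encodings, sketched below (learned positional embeddings are a common alternative):

```python
import torch

def sinusoidal_positions(seq_len, d_model):
    """Sinusoidal positional encodings as in 'Attention Is All You Need'."""
    pos = torch.arange(seq_len).unsqueeze(1).float()    # (seq_len, 1)
    i = torch.arange(0, d_model, 2).float()             # even dimension indices
    freq = 1.0 / (10000 ** (i / d_model))               # one frequency per dimension pair
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * freq)
    pe[:, 1::2] = torch.cos(pos * freq)
    return pe

x = torch.randn(10, 64)                  # token embeddings: (seq_len, d_model)
x = x + sinusoidal_positions(10, 64)     # added in, giving each position a unique signature
```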

11. What’s your opinion on the future of self-attention in computer vision and natural language processing?

Self-attention has already made a significant impact in the fields of computer vision and natural language processing, and its future potential is very exciting. Self-attention allows for more efficient computation by allowing models to focus on relevant parts of an input sequence, rather than having to process the entire sequence at once. This makes it possible to build larger and more complex models that can better capture long-term dependencies. Additionally, self-attention can be used to improve existing architectures such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs).

In terms of computer vision, self-attention can be used to identify objects within images or videos, which could lead to improved object detection and recognition capabilities. In natural language processing, self-attention can help with tasks such as machine translation, question answering, and text summarization. It can also be used to generate more accurate representations of words and sentences, which could lead to improved performance in many NLP tasks.

Overall, self-attention has great potential to revolutionize both computer vision and natural language processing. Its ability to efficiently process large amounts of data and accurately represent relationships between different elements makes it a powerful tool for building advanced AI systems.

12. What happens if you remove the masking layer from transformer architectures?

If the causal masking layer is removed from a decoder-style transformer, each position can attend to future tokens when predicting the current token. During training with teacher forcing this lets the model trivially copy the answer, so the training loss looks excellent while the model fails at inference time, when future tokens genuinely are not available. Note that this applies to autoregressive decoders; encoder-style models such as BERT deliberately use no causal mask because they are not trained to predict the next token.

13. Why do we apply a softmax activation to the dot-product scores when performing self-attention?

The softmax is applied to the scaled dot products of the queries and keys, before the result is multiplied with the value vectors. Raw dot products can be arbitrarily large or negative, which makes them hard to interpret and unstable to train on. The softmax converts each row of scores into a probability distribution, non-negative values that sum to 1, so each attention weight can be read as the fraction of attention one token pays to another. Scaling the dot products by the square root of the key dimension before the softmax keeps them in a range where the softmax still has useful gradients.
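A two-line illustration of what the softmax buys us: scores on any scale become rows of comparable, interpretable weights.

```python
import torch
import torch.nn.functional as F

scores = torch.tensor([[2.0, -1.0, 0.5],   # raw dot-product scores: any real values
                       [0.1,  0.1, 0.1]])
weights = F.softmax(scores, dim=-1)        # each row becomes a probability distribution

print(weights)           # non-negative entries
print(weights.sum(-1))   # tensor([1., 1.])
```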

14. What are some common mistakes made when implementing self-attention mechanisms?

One of the most common mistakes made when implementing self-attention mechanisms is not properly accounting for context. Self-attention models rely on understanding the relationships between words in a sentence, and if these relationships are not taken into account, the model may fail to accurately capture the meaning of the text. Additionally, it is important to ensure that the attention weights are correctly calculated; otherwise, the model may be unable to focus on the relevant parts of the input.

Another mistake often seen with self-attention implementations is failing to consider the computational complexity of the model. Self-attention models can become computationally expensive very quickly, so it is important to carefully consider the tradeoff between accuracy and speed when designing the architecture. Finally, some implementations may also suffer from overfitting due to the large number of parameters involved in self-attention models. To avoid this issue, it is important to use regularization techniques such as dropout or weight decay.

15. What are some good practices to follow when working with self-attention based models?

When working with self-attention based models, it is important to follow some good practices. Firstly, it is essential to ensure that the data used for training and testing is of high quality. This means that the data should be clean, consistent, and free from any noise or outliers. Additionally, it is important to use a large enough dataset so that the model can learn meaningful patterns from the data.

Secondly, when designing the architecture of the model, it is important to consider the size of the input sequence as well as the number of layers in the network. The larger the input sequence, the more complex the model will need to be in order to capture all the information. Similarly, increasing the number of layers can help improve the performance of the model but may also lead to overfitting if not done carefully.

Thirdly, it is important to pay attention to hyperparameter tuning. Self-attention based models require careful selection of learning rate, batch size, optimizer, etc. in order to achieve optimal performance. It is also important to monitor the training process closely and adjust the parameters accordingly.

Finally, it is important to evaluate the model’s performance on multiple metrics such as accuracy, precision, recall, F1 score, etc. This helps to identify potential areas of improvement and allows for further optimization of the model.

16. Do all sequences have equal importance when performing self-attention? If no, then how can we differentiate between important and unimportant tokens?

No, not all tokens are equally important when performing self-attention, and the attention weights are precisely what differentiates them. Each weight is computed from the learned similarity between a query and a key (together with positional information), so during training the model learns which tokens deserve focus in a given context. Tokens that should be ignored entirely, such as padding, are excluded explicitly with a mask. Multi-head attention refines this further by letting different heads specialize in different kinds of relationships between tokens.

17. What are some alternatives to self-attention mechanisms?

One alternative to self-attention mechanisms is convolutional neural networks (CNNs). CNNs are a type of deep learning architecture that uses multiple layers of neurons and filters to extract features from an input. This allows the network to learn complex patterns in data, such as images or text. Another alternative is recurrent neural networks (RNNs), which use feedback loops to process sequences of data. RNNs can be used for tasks such as language translation and speech recognition. Finally, there are also graph neural networks (GNNs) which use graphs to represent relationships between objects. GNNs can be used for tasks such as recommendation systems and knowledge representation.

18. What is the difference between global and local attention?

Global attention is a type of self-attention that looks at the entire sequence when making decisions. It takes into account all elements in the sequence, regardless of their relative position to each other. This allows for more complex relationships between elements to be taken into consideration. Global attention can also help with long-term dependencies and capturing global context.

Local attention, on the other hand, focuses on only a few elements at a time. It pays attention to the local context by looking at the immediate neighbors of an element. This helps capture short-term dependencies and makes it easier to identify patterns within the data. Local attention is often used when dealing with shorter sequences or when there are fewer elements to consider.

19. What are causal masks? How do they help improve the accuracy of self-attention based models?

Causal masks are a masking technique used in self-attention based models. They ensure that the model only takes into account information from earlier time steps when making predictions about later time steps, preventing it from using future information to make decisions (see the sketch under question 5). By restricting each position to the information that would actually be available at inference time, causal masks keep training honest and therefore improve the real-world accuracy of autoregressive self-attention models.

20. Can you explain what sparse attention is?

Sparse attention is a family of self-attention variants in which each token attends to only a subset of positions, for example a local window, strided positions, or a few global tokens, rather than the full sequence. Because full attention costs O(n²) time and memory in the sequence length, sparsifying the attention pattern makes much longer sequences tractable. Sparse attention has been used in models such as the Sparse Transformer, Longformer, and BigBird for tasks including language modeling, machine translation, and question answering.
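The sketch below illustrates a local-window pattern with a dense mask, just to show the idea; real sparse-attention implementations avoid materializing the full n-by-n score matrix, which is where the efficiency gain actually comes from:

```python
import torch
import torch.nn.functional as F

seq_len, window = 8, 2
scores = torch.randn(seq_len, seq_len)

# Local-window sparsity: token i may attend only to tokens j with |i - j| <= window.
idx = torch.arange(seq_len)
allowed = (idx.unsqueeze(0) - idx.unsqueeze(1)).abs() <= window
scores = scores.masked_fill(~allowed, float('-inf'))
weights = F.softmax(scores, dim=-1)

print((weights[0] > 0).sum())   # tensor(3): the first token attends to 3 positions, not 8
```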

FAQs

What is the formula for self attention?

This quadratic complexity comes from the self-attention mechanism: Attention(Q, K, V) = softmax(QKᵀ / √dₖ)V.

What is self attention in simple words?

Self-attention, also called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of that same sequence. It has been shown to be very useful in machine reading, abstractive summarization, and image description generation.

Why is self-attention important?

In layman's terms, the self-attention mechanism allows the inputs to interact with each other (“self”) and find out who they should pay more attention to (“attention”). The outputs are aggregates of these interactions and attention scores.

What is the difference between attention and self-attention?

Self-attention is a specific type of attention. The difference between regular attention and self-attention is that instead of relating an input to an output sequence, self-attention focuses on a single sequence. It allows the model to let a sequence learn information about itself.

What is the complexity of self attention?

The complexity of the initial convolution is O(n × d²) and the complexity of the self-attention layers becomes O((n/k)² × d), where k is the kernel size of the convolution layer. Hence the overall complexity is O(n × d² + (n/k)² × d).

What is the difference between self attention and multi head attention?

Self-attention means X pays attention to X, as opposed to "normal" attention where X pays attention to Y. Multi-head attention is as opposed to single-head attention. You can choose to use multi- or single-head attention equally for self-attention and for normal attention.

How does self attention learn?

Self-attention compares all members of the input sequence with each other and modifies the corresponding output sequence positions. In other words, the self-attention layer performs a differentiable key-value search over the input sequence for each input position and adds the results to the output sequence.

What is soft attention vs hard attention?

Soft attention computes the context vector as a weighted sum of the encoder hidden states. Hard attention, instead of taking a weighted average of all hidden states, uses the attention scores to select a single hidden state.

Why is self-attention permutation invariant?

Self-attention contains no notion of token order unless you add positional encoding, as is usually done in language applications. Strictly speaking the layer is permutation-equivariant: permuting the input tokens permutes the outputs in exactly the same way. In a sense, self-attention generalizes the summation operation, since it performs a weighted summation over the value vectors.
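A quick check of this property with a throwaway single-head attention (random weights, no positional encoding; shapes are arbitrary):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(5, 8)       # 5 tokens, 8-dimensional embeddings
w = torch.randn(8, 8)       # one shared projection, just for the demo

def attend(x):
    q, k, v = x @ w, x @ w, x @ w
    return F.softmax(q @ k.T / 8 ** 0.5, dim=-1) @ v

perm = torch.randperm(5)
# Permuting the inputs permutes the outputs identically (equivariance).
print(torch.allclose(attend(x)[perm], attend(x[perm]), atol=1e-5))  # True
```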

What is multi headed attention?

Multi-head Attention is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension.
