
Top 100+ Latest Python Interview Questions and Answers for 2026

Category | Interview Questions

Last Updated On 18/04/2026


You’ve been coding in Python for months or even years, and you feel confident with the language. But when the interview begins, the questions often take unexpected turns—especially common python interview questions for freshers that test fundamentals in ways you didn’t anticipate.

That's the reality of Python Interview Questions in 2026. They don't just test whether you can write a loop or define a function. They test whether you understand how Python actually works under the hood, how you handle data at scale, and whether you can solve problems cleanly under pressure.

This guide covers 100+ carefully selected Python Interview Questions and Answers, organized from foundational syntax all the way through to advanced architecture and optimization questions. Whether you're a fresher preparing for your first screening call or a senior engineer targeting a lead role, this is your complete preparation reference.

Each section builds on the previous one, matching how real Python interviews are structured. Work through it in order or jump straight to the section that matches your target role.

TL;DR — Quick Summary

Section   | Focus Area                                       | Who It's For
----------|--------------------------------------------------|--------------------------------------
Section 1 | Basic Python syntax, data types, and built-ins   | Freshers and all candidates
Section 2 | OOP, memory management, decorators, GIL          | Mid-level and above
Section 3 | Pandas, NumPy, data handling                     | Data engineering and analytics roles
Section 4 | Coding problems and algorithmic patterns         | All technical rounds
Section 5 | Advanced internals, async, closures              | Senior engineers
Section 6 | Architecture, optimization, production scripting | Lead and architect roles
Prep Tips | Study strategy by experience level               | Everyone

Python Interview Questions for Freshers: Core Concepts Every Candidate Must Know

These are the questions that appear in virtually every Python screening round, regardless of role or seniority. Many of these are standard python interview questions for freshers, designed to test core fundamentals early in the process. Getting them wrong signals weak basics, which is why interviewers use them as a quick filter. Don’t underestimate them.

Q1. What is Python, and what makes it widely used across web development, data engineering, and AI?

Python is a high-level, general-purpose programming language known for its clean syntax and readable code. It's interpreted, dynamically typed, and supports multiple programming styles, including procedural, object-oriented, and functional.

What makes it popular across so many fields comes down to a few things:

  • A massive ecosystem of libraries: Django and FastAPI for web, Pandas and NumPy for data, TensorFlow and PyTorch for AI
  • Readable syntax that reduces the time it takes to write and understand code
  • A large, active community that means most problems already have documented solutions
  • Strong support for rapid prototyping, which is valuable in both startups and research environments

Q2. What is PEP 8 and why does it matter?

PEP 8 is Python's official style guide. It defines conventions for writing readable, consistent Python code, the kind that other developers can pick up and understand without having to decode your formatting choices.

The most important conventions to know:

  • Use 4 spaces for indentation, never tabs
  • Keep lines to a maximum of 79 characters
  • Use lowercase with underscores for variable and function names (snake_case)
  • Use CamelCase for class names
  • Leave two blank lines between top-level functions and classes, one blank line between methods inside a class
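A small, hypothetical snippet that follows these conventions (the names are illustrative):

```python
MAX_RETRIES = 3  # constants in UPPER_CASE


class ReportGenerator:
    """CamelCase class name; two blank lines above top-level definitions."""

    def build_title(self, report_name):
        # snake_case method name, 4-space indentation
        return report_name.title()

    def build_footer(self):
        return "End of report"
```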

PEP 8 matters in interviews because it signals professional coding habits. Interviewers notice when code is consistently formatted versus when it looks like it was written in a hurry.

Q3. Is Python interpreted, compiled, or both?

This is one of the most common Basic Python Interview Questions that trips people up because the answer is "both, in a way."

When you run a Python script, two things happen:

  1. Python compiles your source code into bytecode (.pyc files stored in a __pycache__ directory)
  2. The Python Virtual Machine (PVM) interprets that bytecode one instruction at a time at runtime

So Python is technically compiled to bytecode first, then interpreted. The compilation step happens automatically and invisibly. This is why Python is generally described as an interpreted language, even though that's not the complete picture.
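You can see the compiled bytecode yourself with the standard-library dis module:

```python
import dis

def greet(name):
    return f"Hello, {name}"

# Every function carries its compiled bytecode in __code__.co_code;
# dis.dis renders it in human-readable form
dis.dis(greet)
```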

Q4. What is the difference between mutable and immutable data types?

A mutable object can be changed after it's created. An immutable object cannot.

Mutable examples:

  • Lists: You can add, remove, or change elements
  • Dictionaries: You can add, update, or delete key-value pairs

Immutable examples:

  • Strings: You can't change a character in place; you create a new string instead
  • Tuples: Once created, the elements cannot be changed

This distinction matters in practice. Immutable objects are safe to use as dictionary keys. Mutable objects are not, because their state can change after being used as a key, which would break the dictionary's internal structure.
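A quick demonstration (the variable names are illustrative):

```python
nums = [1, 2, 3]
nums[0] = 99                 # fine: lists are mutable
print(nums)                  # [99, 2, 3]

name = "alice"
try:
    name[0] = "A"            # strings are immutable: this raises TypeError
except TypeError:
    name = "A" + name[1:]    # the immutable way: build a new string

lookup = {(1, 2): "ok"}      # a tuple works as a dict key (immutable, hashable)
# lookup[[1, 2]] = "no"      # would raise TypeError: unhashable type: 'list'
```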

Q5. What are the common built-in data types and when do you choose a tuple over a list or a set over a list?

Python's core built-in types are: int, float, str, bool, list, tuple, set, frozenset, dict, and NoneType.

Tuple over a list when:

  • The data shouldn't change after creation (coordinates, RGB values, database records)
  • You need to use the collection as a dictionary key or in a set
  • You want to signal to other developers that this data is fixed

Set over a list when:

  • You need fast membership testing (checking if an item exists is O(1) in a set, O(n) in a list)
  • You need to eliminate duplicates automatically
  • You need to perform union, intersection, or difference operations between collections
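A sketch of both choices in action:

```python
items_list = list(range(100_000))
items_set = set(items_list)

# membership testing: O(n) scan in a list, O(1) hash lookup in a set
print(99_999 in items_set)

# sets deduplicate automatically and support set algebra directly
a = {1, 2, 3}
b = {3, 4}
print(a | b)  # union: {1, 2, 3, 4}
print(a & b)  # intersection: {3}
print(a - b)  # difference: {1, 2}
```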

Q6. What is the difference between == and is?

This is a question that catches a lot of candidates who haven't thought carefully about Python's object model.

  • == checks value equality: Are the two objects equal in value?
  • is checks identity equality: Are the two objects the same object in memory?

The reason this causes confusion is Python's integer caching. For small integers (typically -5 to 256), CPython reuses the same object in memory, so a is b returns True even when a and b were assigned separately. For larger integers, two separately assigned variables with the same value are usually different objects; strings have a similar wrinkle called interning. Either way, this is an implementation detail you shouldn't rely on.

The practical rule: use == for value comparisons. The only use is when you specifically want to check identity, most commonly when checking if x is None.
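A short demonstration (CPython behavior; the caching range is an implementation detail):

```python
a = 256
b = 256
print(a is b)          # True on CPython: small integers (-5 to 256) are cached

big_a = 10 ** 10
big_b = 10 ** 10
print(big_a == big_b)  # True: equal values
# whether big_a is big_b is an implementation detail -- don't rely on it

value = None
print(value is None)   # the idiomatic identity check
```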

Q7. What does the pass statement do, and when would you use it?

pass is a no-operation statement. It tells Python, "there's nothing here, keep moving." Python requires at least one statement in certain blocks like function bodies, class definitions, and if branches. pass satisfies that requirement without doing anything.

Real-world situations where pass makes sense:

  • Defining an abstract base class method that subclasses are expected to override
  • Creating a placeholder function or class during early development, before the implementation is written
  • Writing a try-except block where you deliberately want to silently ignore a specific exception
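The three situations above, sketched (PaymentProvider, export_report, and optional_speedups are hypothetical names):

```python
class PaymentProvider:
    """Base class; concrete providers are expected to override charge()."""
    def charge(self, amount):
        pass  # subclasses implement this


def export_report():
    pass  # placeholder during early development


try:
    import optional_speedups  # hypothetical optional dependency
except ImportError:
    pass  # deliberately ignored: the plain-Python path still works
```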

Q8. What are *args and **kwargs?

Both are ways to pass a variable number of arguments to a function.

Both are ways to pass a variable number of arguments to a function.

*args collects extra positional arguments into a tuple:

python
def add(*args):
    return sum(args)

add(1, 2, 3)  # returns 6

**kwargs collects extra keyword arguments into a dictionary:

python
def display(**kwargs):
    for key, value in kwargs.items():
        print(f"{key}: {value}")

display(name="Alice", role="Engineer")

Use *args when you don't know how many positional values will be passed. Use **kwargs when you don't know which keyword arguments will be passed. You can use both in the same function, but *args must come before **kwargs in the signature.

Q9. What is the difference between return and yield?

A function with return executes, produces a value, and exits. The next time you call it, it starts from the beginning.

A function with yield produces a value, pauses execution, and resumes from where it left off the next time it's called. A function with yield is a generator function, and calling it returns a generator object.

python
def count_up(n):
    for i in range(n):
        yield i

gen = count_up(3)
next(gen)  # 0
next(gen)  # 1
next(gen)  # 2

Generators are memory-efficient because they produce values one at a time rather than building an entire list in memory upfront. This makes them particularly useful when working with large datasets or infinite sequences.

Q10. What are Python modules and packages, and how does Python resolve imports?

A module is a single Python file containing functions, classes, and variables. A package is a directory containing multiple modules along with an __init__.py file that tells Python to treat the directory as a package.

When you write import mymodule, Python searches for it in this order:

  1. The current directory
  2. Directories listed in the PYTHONPATH environment variable
  3. The standard library installation directory
  4. Site-packages (where third-party libraries are installed)

This search order is stored in sys.path, which you can inspect and modify at runtime if needed.
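You can inspect and adjust that search order directly (the inserted path here is hypothetical):

```python
import sys

# sys.path holds the search order described above
for entry in sys.path[:3]:
    print(entry)

# prepending a directory makes its modules win the search (use sparingly)
sys.path.insert(0, "/opt/myproject/libs")
```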
 

Common Python Interview Questions: Data Structures, OOP, and Memory

These questions go deeper than syntax. They test whether you understand how Python manages objects, memory, and code structure. Expect these in the second round of most Python interviews, especially for backend and data engineering roles.

Q11. How does Python's memory management work?

Python manages memory through a private heap: a dedicated area of memory that Python controls entirely, separate from the system memory your OS manages.

Here's how the three layers work together:

  • Private heap stores all Python objects and data structures. You never access this directly. Python's memory manager handles all allocation and deallocation
  • Reference counting is the primary cleanup mechanism. Every object tracks how many variables point to it. When that count hits zero, Python immediately frees the memory
  • Cyclic garbage collector handles the edge cases where two objects reference each other in a loop, keeping each other's reference count above zero permanently. The cyclic GC detects these cycles and cleans them up

Why this matters in practice: when you're building long-running services or processing large datasets, understanding this model helps you write code that doesn't quietly accumulate memory over time.
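A small demonstration with sys.getrefcount and gc (note that getrefcount itself adds one temporary reference while it runs):

```python
import gc
import sys

data = [1, 2, 3]
print(sys.getrefcount(data))   # typically 2: `data` plus getrefcount's argument

alias = data                   # a second reference
print(sys.getrefcount(data))   # typically 3
del alias                      # count drops again; at zero, memory is freed

# a reference cycle keeps both counts above zero forever...
a = {}
b = {"peer": a}
a["peer"] = b
del a, b
# ...until the cyclic garbage collector finds and frees it
gc.collect()
```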

Q12. What is the difference between a shallow copy and a deep copy?

Both create a new object, but they handle nested objects differently.

Shallow copy creates a new container but keeps references to the same inner objects:

python
import copy

original = [[1, 2], [3, 4]]
shallow = copy.copy(original)
shallow[0][0] = 99
print(original)  # [[99, 2], [3, 4]] -- original is affected

Deep copy creates a completely independent copy of the object and everything nested inside it:

python
original = [[1, 2], [3, 4]]
deep = copy.deepcopy(original)
deep[0][0] = 99
print(original)  # [[1, 2], [3, 4]] -- original is not affected

The bug shallow copies cause is subtle and common: you think you have an independent copy, but modifying a nested object modifies the original too. Use deep copy whenever your data structure contains mutable nested objects that you need to modify independently.

Q13. What is a Python dictionary, and how does it maintain insertion order?

A dictionary is a collection of key-value pairs with O(1) average-case lookup, insertion, and deletion, made possible by a hash table under the hood.

Before Python 3.7, dictionaries did not guarantee any particular order. The internal hash table stored items based on hash values, and the iteration order was unpredictable.

From Python 3.7 onwards, dictionaries officially maintain insertion order as part of the language specification. The implementation changed to use a compact array that preserves the sequence in which keys were added, while still maintaining the hash table for fast lookups.

This matters in practice whenever you're iterating over a dictionary, and the order of results is meaningful. For example, when building an ordered configuration or tracking the sequence of events.
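A quick illustration of the guarantee:

```python
events = {}
events["login"] = "09:00"
events["upload"] = "09:05"
events["logout"] = "09:30"

# from Python 3.7 onwards, iteration follows insertion order
print(list(events))  # ['login', 'upload', 'logout']
```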

Q14. How do you create a class in Python, and what does self mean?

A class is defined using the class keyword. The __init__ method runs automatically when you create an instance and is used to set up the object's initial state.

python
class Employee:
    def __init__(self, name, role):
        self.name = name
        self.role = role

    def describe(self):
        return f"{self.name} works as a {self.role}"

emp = Employee("Alice", "Engineer")
print(emp.describe())

self refers to the specific instance the method is being called on. When you write emp.describe(), Python automatically passes emp as the first argument to describe; that's what self receives. It's not a keyword, just a strong convention. You could name it anything, but you shouldn't.

Q15. How does inheritance work in Python, including multiple inheritance?

Inheritance lets one class (the child) acquire the attributes and methods of another (the parent).

python
class Animal:
    def speak(self):
        return "Some sound"

class Dog(Animal):
    def speak(self):
        return "Woof"

Python also supports multiple inheritance. A class can inherit from more than one parent:

python
class Flyable:
    def move(self):
        return "Flying"

class Swimmable:
    def move(self):
        return "Swimming"

class Duck(Flyable, Swimmable):
    pass

d = Duck()
print(d.move())  # "Flying" -- Flyable comes first in the MRO

When multiple parents define the same method, Python uses the Method Resolution Order (MRO) to decide which one wins. The MRO follows the C3 linearization algorithm, left-to-right through the parent list, depth before breadth.
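The MRO can be inspected directly; here is a self-contained version of the Duck example:

```python
class Flyable:
    def move(self):
        return "Flying"

class Swimmable:
    def move(self):
        return "Swimming"

class Duck(Flyable, Swimmable):
    pass

# Python checks each class in this order until it finds the method
print([cls.__name__ for cls in Duck.__mro__])
# ['Duck', 'Flyable', 'Swimmable', 'object']
```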

Q16. What is the Global Interpreter Lock (GIL) in CPython?

The GIL is a mutex: a lock that allows only one thread to execute Python bytecode at a time, even on a multi-core machine.

It exists because CPython's memory management (specifically, reference counting) is not thread-safe. Without the GIL, two threads modifying the same object's reference count simultaneously could corrupt memory.

Practical implications:

  • For I/O-bound tasks (network calls, file reads), threads work fine because the GIL is released while waiting for I/O
  • For CPU-bound tasks (heavy computation, data processing), threads don't give you real parallelism; use multiprocessing instead, which creates separate processes each with its own GIL

This is one of the most important Common Python Interview Questions for backend and data engineering roles because it directly affects how you architect concurrent systems in Python.
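A minimal sketch of the practical difference, using the standard-library concurrent.futures (the task functions are illustrative):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_task(n):
    time.sleep(0.05)   # the GIL is released while waiting
    return n

def cpu_task(n):
    return sum(i * i for i in range(n))   # holds the GIL while computing

# I/O-bound: four threads overlap their waiting, so this takes
# roughly 0.05s rather than 4 x 0.05s
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(io_task, range(4)))
print(f"threads: {time.perf_counter() - start:.2f}s, results={results}")

# CPU-bound work would need ProcessPoolExecutor or multiprocessing instead:
# separate processes, each with its own interpreter and its own GIL.
```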

Q17. What is the difference between @staticmethod, @classmethod, and a regular instance method?

These three method types serve different purposes:

Instance method (regular):

  • Receives self as the first argument
  • Has access to the instance and can read/modify the instance state
  • Used for behavior that belongs to a specific object

@classmethod:

  • Receives cls (the class itself) as the first argument instead of self
  • Has access to the class but not to any specific instance
  • Commonly used as alternative constructors

@staticmethod:

  • Receives neither self nor cls
  • Just a regular function that lives inside the class namespace for organizational purposes
  • Used for utility functions logically related to the class but not dependent on class or instance state

python
class Date:
    def __init__(self, day, month, year):
        self.day = day
        self.month = month
        self.year = year

    @classmethod
    def from_string(cls, date_string):
        day, month, year = map(int, date_string.split('-'))
        return cls(day, month, year)

    @staticmethod
    def is_valid_year(year):
        return year > 0

Q18. What are Python decorators and how do they work?

A decorator is a function that takes another function as input, wraps it with additional behavior, and returns the wrapped version. The @ syntax is just shorthand for passing the function through the decorator.

python
def log_call(func):
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        result = func(*args, **kwargs)
        print("Done")
        return result
    return wrapper

@log_call
def process_data(data):
    return data.upper()

This is equivalent to writing process_data = log_call(process_data).

Real-world uses of decorators in production:

  • Adding retry logic to API calls
  • Enforcing authentication on web route handlers
  • Timing function execution for performance monitoring
  • Caching function results to avoid repeated expensive computations
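As an illustration of the monitoring use case, a minimal timing decorator (timed and slow_sum are hypothetical names):

```python
import time
from functools import wraps

def timed(func):
    """A minimal performance-monitoring decorator."""
    @wraps(func)   # preserves func.__name__ and docstring on the wrapper
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.4f}s")
        return result
    return wrapper

@timed
def slow_sum(n):
    return sum(range(n))

print(slow_sum(1_000_000))
```

functools.wraps matters in practice: without it, the wrapped function would report its name as "wrapper", which breaks debugging and introspection.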

Q19. What are Python generators, and why are they memory-efficient?

A generator is a function that uses yield to produce values one at a time, pausing between each one. Instead of building an entire list in memory and returning it, it produces each value on demand.

Why this matters for memory:

python

# This creates a list of 1 million integers in memory all at once
numbers = [x * 2 for x in range(1_000_000)]

# This creates a generator that produces one value at a time
numbers = (x * 2 for x in range(1_000_000))

The list consumes tens of megabytes: an 8 MB pointer array plus the integer objects it references. The generator uses almost nothing; it only holds the state needed to produce the next value.

Use a generator over a list comprehension when:

  • The dataset is large and you only need to iterate through it once
  • You're processing a stream of data where you don't know the size upfront
  • You want to compose multiple transformation steps without materializing intermediate results
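A sketch of composing generator steps without materializing intermediate lists (the function names are illustrative):

```python
def read_numbers():
    """Simulated data stream (stand-in for a file or network source)."""
    for i in range(1, 11):
        yield i

def doubled(values):
    for v in values:
        yield v * 2

def only_big(values, threshold):
    for v in values:
        if v > threshold:
            yield v

# each stage pulls one value at a time; nothing is built until list()
pipeline = only_big(doubled(read_numbers()), 10)
print(list(pipeline))  # [12, 14, 16, 18, 20]
```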

Q20. What are Python namespaces and the LEGB rule?

A namespace is a mapping from names to objects. Python uses namespaces to keep variable names from different scopes from colliding with each other.

The LEGB rule defines the order Python searches for a variable name:

  • L – Local: Inside the current function
  • E – Enclosing: Inside any enclosing functions (relevant for closures)
  • G – Global: At the module level
  • B – Built-in: Python's built-in names like len, range, print

python
x = "global"

def outer():
    x = "enclosing"
    def inner():
        x = "local"
        print(x)  # prints "local"
    inner()

outer()

Python searches from the inside out. Local first, then Enclosing, then Global, then Built-in. The first match wins. If no match is found anywhere, Python raises a NameError.

Python Data Engineer Interview Questions: Pandas, NumPy, and Data Handling

This section covers the questions you'll face in data engineering, data science, and analytics interviews. If you're targeting a data-focused role, this is your highest-priority preparation area. Interviewers in these rounds often hand you a dataset and ask you to work with it live; knowing the theory isn't enough, you need to be comfortable writing these operations from memory.

Q21. What is a Pandas DataFrame, and how does it differ from a Python list or dictionary?

A DataFrame is a two-dimensional, labeled data structure. Think of it as a table with named columns and indexed rows. It's the primary data structure for working with structured data in Python.

Here's how it differs from native Python structures:

  • list stores values in a single sequence with no column labels or named axes
  • dictionary maps keys to values but doesn't support tabular operations, alignment, or vectorized computation natively
  • DataFrame combines both concepts. It has column names like a dictionary and sequential indexing like a list, plus a massive library of built-in operations for filtering, grouping, merging, and transforming data

In a data pipeline, you'd use a DataFrame when you need to:

  • Load structured data from CSV, JSON, SQL, or Parquet files
  • Apply transformations across entire columns at once
  • Join datasets on common keys
  • Aggregate data by groups for analysis or reporting
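A minimal illustration with made-up data:

```python
import pandas as pd

# a small hypothetical dataset
df = pd.DataFrame({
    "name": ["Alice", "Bob", "Carol"],
    "team": ["data", "web", "data"],
    "score": [85, 90, 78],
})

df["score_pct"] = df["score"] / 100        # vectorized column transform
data_team = df[df["team"] == "data"]       # label-based filtering
print(df.groupby("team")["score"].mean())  # aggregation by group
```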

Q22. How do you handle missing values in a Pandas DataFrame?

Missing values in Pandas are represented as NaN (Not a Number) for numeric data or None for object types. There are three main approaches:

dropna() removes rows or columns containing missing values:

python
df.dropna()               # drop rows with any NaN
df.dropna(axis=1)         # drop columns with any NaN
df.dropna(thresh=3)       # keep rows with at least 3 non-NaN values

Use this when missing data is random and the rows or columns with missing values aren't important to your analysis.

fillna() replaces missing values with a specified value or strategy:

python
df.fillna(0)              # replace NaN with 0
df.ffill()                # forward fill (fillna(method='ffill') is deprecated)
df.fillna(df.mean())      # fill with column mean

Use this when you want to preserve all rows and have a reasonable substitute value available.

Imputation replaces missing values with statistically derived values using tools like sklearn's SimpleImputer or IterativeImputer. Use this in machine learning pipelines where the quality of the replacement value matters more than simplicity.

The right choice depends on why the data is missing. If it's missing at random and the dataset is large, dropping is fine. If missing values carry meaning or the dataset is small, filling or imputing is better.

Q23. How do you combine multiple Pandas DataFrames?

There are three main methods, and they serve different purposes:

merge() works like a SQL JOIN, combining rows from two DataFrames based on matching values in one or more columns:

python
pd.merge(df1, df2, on='user_id', how='inner')
# how options: inner, left, right, outer

Use this when combining datasets that share a common key column.

join() is a shortcut for merge that works on index values by default:

python
df1.join(df2, how='left')

Use this when your DataFrames share the same index and you want a concise syntax.

concat() stacks DataFrames either vertically (adding more rows) or horizontally (adding more columns):

python
pd.concat([df1, df2], axis=0)  # stack rows
pd.concat([df1, df2], axis=1)  # stack columns

Use this when combining datasets with the same structure, for example, combining monthly reports into an annual dataset.

Q24. What is reindexing in Pandas, and why would you use it?

Reindexing means conforming a DataFrame or Series to a new index. It's how you align data to a specific set of labels, whether those labels currently exist in the data or not.

python
df = pd.DataFrame({'score': [85, 90, 78]}, index=['Alice', 'Bob', 'Carol'])
new_index = ['Alice', 'Bob', 'Carol', 'Dave']
df_reindexed = df.reindex(new_index)

In this example, Dave doesn't exist in the original DataFrame. Pandas fills the missing row with NaN by default. You can override this with a fill_value parameter.

When reindexing is useful:

  • Aligning two time series datasets to the same date range before joining them
  • Ensuring a DataFrame has all expected categories, even if some have no data
  • Reordering rows or columns into a specific sequence for reporting

Q25. How do you efficiently load a large CSV into a NumPy array and perform basic operations on it?

For loading, NumPy's loadtxt() and genfromtxt() handle CSV files directly:

python
import numpy as np
data = np.genfromtxt('data.csv', delimiter=',', skip_header=1)

genfromtxt() handles missing values more gracefully than loadtxt(), making it the better choice for real-world data.

Once loaded, common operations:

python
# Sort by first column
sorted_data = data[data[:, 0].argsort()]

# Filter rows where column 2 > 50
filtered = data[data[:, 2] > 50]

# Reshape from 2D to 3D
reshaped = data.reshape(10, 5, -1)  # total size must match the target shape

For very large files that don't fit in memory, load in chunks using Pandas with chunksize and convert each chunk to NumPy:

python
for chunk in pd.read_csv('large_file.csv', chunksize=10000):
    arr = chunk.to_numpy()
    # process arr


Q26. What are the advantages of NumPy arrays over Python lists for numerical computation?

This is one of the most direct Python Interview Questions for Data Engineer roles because the answer explains why the entire data science ecosystem is built on NumPy rather than plain Python.

Key advantages:

  • Fixed-type storage: NumPy arrays store elements of a single data type, so each element takes a fixed, predictable amount of memory. Python lists store pointers to objects, which adds significant overhead
  • Vectorization: NumPy operations apply to entire arrays at once using optimized C code, eliminating the need for Python-level loops. A NumPy addition of two arrays is orders of magnitude faster than adding two lists element-by-element in a loop
  • Memory efficiency: A NumPy array of 1 million floats uses roughly 8MB. An equivalent Python list uses around 35MB due to object overhead
  • Broadcasting: NumPy allows arithmetic between arrays of different but compatible shapes without explicit loops or reshaping

python
# Without NumPy – slow Python loop
result = [a + b for a, b in zip(list1, list2)]

# With NumPy – fast vectorized operation
result = array1 + array2

Q27. How do you delete the second column from a NumPy array and replace it with new values?

NumPy doesn't support in-place column deletion directly. You use np.delete() to create a new array without the target column, then np.insert() or np.column_stack() to add the replacement:

python
import numpy as np

arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Delete column at index 1 (second column)
arr_deleted = np.delete(arr, 1, axis=1)
# Result: [[1, 3], [4, 6], [7, 9]]

# New column values
new_col = np.array([[20], [50], [80]])

# Insert at position 1
arr_replaced = np.insert(arr_deleted, 1, new_col.flatten(), axis=1)
# Result: [[1, 20, 3], [4, 50, 6], [7, 80, 9]]

The key thing to be aware of is that np.delete() and np.insert() return new arrays; they don't modify the original. If you're working with large arrays, be conscious of the memory cost of creating multiple copies during this process.

Q28. How do you read data from a publicly shared Google Sheet into a Pandas DataFrame?

A publicly shared Google Sheet can be accessed in CSV format by modifying its sharing URL. The standard approach:

python
import pandas as pd
sheet_id = "your_sheet_id_here"
sheet_name = "Sheet1"
url = f"https://docs.google.com/spreadsheets/d/{sheet_id}/gviz/tq?tqx=out:csv&sheet={sheet_name}"
df = pd.read_csv(url)
 

This works because Google Sheets exposes a CSV export endpoint for publicly shared documents. No authentication is needed for public sheets.

For private sheets, you'd use the Google Sheets API with the google-auth and gspread libraries, which require OAuth2 credentials.

Q29. How do you handle large files in Python without loading everything into memory?

This is one of the most practically important Python Interview Questions for Data Engineer roles. There are several approaches depending on the file type and use case:

Chunked reading with Pandas:

python
chunk_iter = pd.read_csv('large_file.csv', chunksize=50000)
results = []
for chunk in chunk_iter:
    filtered = chunk[chunk['value'] > 100]
    results.append(filtered)
final_df = pd.concat(results)

Python generators for line-by-line processing:

python
def read_large_file(filepath):
    with open(filepath, 'r') as f:
        for line in f:
            yield line.strip()

for line in read_large_file('large_file.txt'):
    process(line)

Dask for out-of-memory DataFrames: Dask provides a Pandas-like API that processes data in chunks automatically, making it suitable for datasets that are too large for RAM but too structured for line-by-line processing.

The general principle: never load more than you need at once. Read in chunks, process each chunk, and accumulate only the results you need.

Q30. How do you find items in one Pandas Series that aren't in another, and items not common to both?

Items in Series A but not in Series B:

python
import pandas as pd

a = pd.Series([1, 2, 3, 4, 5])
b = pd.Series([3, 4, 5, 6, 7])

in_a_not_b = a[~a.isin(b)]
# Result: 1, 2

Items not common to both (symmetric difference):

python
not_common = pd.Series(list(set(a).symmetric_difference(set(b))))
# Result: 1, 2, 6, 7

You can also use NumPy's setdiff1d and setxor1d for the same operations on arrays:

python
import numpy as np
np.setdiff1d(a, b)       # in a but not b
np.setxor1d(a, b)        # not common to both

The isin() approach is generally faster for large Series because it's vectorized. The set-based approach is cleaner for smaller datasets where readability matters more than raw speed.


Python Coding Interview Questions: Problems, Patterns, and Solutions

This is the section most candidates either love or dread. Live coding rounds test whether you can translate clear thinking into working code under pressure. The good news is that most Python Coding Interview Questions follow repeatable patterns. Once you recognize the pattern, the solution becomes much more approachable.

For each problem below, focus on understanding the approach and the reasoning behind it, not just the code.

Q31. Given a list of integers and a target value, return the indices of the two numbers that add up to the target.

The naive approach uses two nested loops. For each number, check every other number. This works but runs in O(n²) time.

The efficient approach uses a dictionary to store numbers you've already seen:

python
def two_sum(nums, target):
    seen = {}
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i

For each number, you calculate what value you'd need to complete the pair, then check if you've seen it already. Dictionary lookups are O(1), so the overall solution runs in O(n) time with O(n) space.

The key insight: instead of looking forward for a match, store what you've already seen and check backward.

Q32. Given a string containing the brackets (), [], and {}, check whether it is balanced and correctly nested.

The right data structure here is a stack. Every time you see an opening bracket, push it. Every time you see a closing bracket, check whether it matches the most recent opening bracket.

python
def is_valid(s):
    stack = []
    mapping = {')': '(', '}': '{', ']': '['}

    for char in s:
        if char in mapping:
            top = stack.pop() if stack else '#'
            if mapping[char] != top:
                return False
        else:
            stack.append(char)
    return not stack

The stack is empty at the end only if every opening bracket was properly closed in the right order. Time complexity is O(n), space is O(n).

Q33. Find the length of the longest substring that contains no repeated characters.

The efficient approach uses a sliding window with two pointers and a set to track characters in the current window:

python
def length_of_longest_substring(s):
    char_set = set()
    left = 0
    max_length = 0

    for right in range(len(s)):
        while s[right] in char_set:
            char_set.remove(s[left])
            left += 1
        char_set.add(s[right])
        max_length = max(max_length, right - left + 1)

    return max_length

The right pointer expands the window. When a duplicate is found, the left pointer shrinks the window until the duplicate is gone. This runs in O(n) time because each character is added and removed from the set at most once.

Q34. Given a list of strings, group words that are anagrams of each other into sublists.

The key insight is finding a hashable representation that's identical for all anagrams of the same word. Sorting the characters works perfectly: every anagram of "eat" sorts to "aet".

python
from collections import defaultdict

def group_anagrams(strs):
   groups = defaultdict(list)

   for word in strs:
       key = tuple(sorted(word))
       groups[key].append(word)

   return list(groups.values())

The sorted characters become a tuple (tuples are hashable, lists are not), which serves as the dictionary key. All words with the same sorted key end up in the same group. Time complexity is O(n * k log k), where k is the maximum word length.

Q35. Design a Least Recently Used cache with O(1) time complexity for both get and put operations.

An LRU cache evicts the least recently used item when it's full. Getting O(1) for both operations requires combining two data structures:

  • A dictionary for O(1) key lookup
  • A doubly linked list to track usage order, with the most recent item at the head and the least recent at the tail

python
from collections import OrderedDict

class LRUCache:
   def __init__(self, capacity):
       self.cache = OrderedDict()
       self.capacity = capacity

   def get(self, key):
       if key not in self.cache:
           return -1
       self.cache.move_to_end(key)
       return self.cache[key]

   def put(self, key, value):
       if key in self.cache:
           self.cache.move_to_end(key)
       self.cache[key] = value
       if len(self.cache) > self.capacity:
           self.cache.popitem(last=False)

Python's OrderedDict handles the linked list behavior internally. move_to_end() marks an item as recently used. popitem(last=False) removes the oldest item when capacity is exceeded.

Q36. Determine whether a linked list contains a cycle using Floyd's slow and fast pointer algorithm.

The idea: use two pointers moving at different speeds. If there's a cycle, the fast pointer will eventually lap the slow pointer, and they'll meet. If there's no cycle, the fast pointer reaches the end.

python
def has_cycle(head):
   slow = head
   fast = head

   while fast and fast.next:
       slow = slow.next
       fast = fast.next.next
       if slow == fast:
           return True

   return False

This runs in O(n) time and uses O(1) space; a hash set approach would need O(n) extra space to track visited nodes. The fast pointer moves two steps at a time, the slow pointer moves one. In a cycle, the gap between them decreases by one each iteration until they meet.

Q37. Convert a deeply nested dictionary into a flat dictionary where nested keys are joined with dot notation.

This is a recursive problem. For each key-value pair, if the value is a dictionary, recurse deeper. If it isn't, add the accumulated key path to the result.

python
def flatten_dict(d, parent_key='', separator='.'):
   items = {}

   for key, value in d.items():
       new_key = f"{parent_key}{separator}{key}" if parent_key else key

       if isinstance(value, dict):
           items.update(flatten_dict(value, new_key, separator))
       else:
           items[new_key] = value

   return items

# Example
nested = {"a": {"b": {"c": 1}, "d": 2}, "e": 3}
print(flatten_dict(nested))
# {"a.b.c": 1, "a.d": 2, "e": 3}

The parent_key accumulates the path as you recurse deeper. When you hit a non-dict value, the full path becomes the key in the flat result.

Q38. Given a list of elements, find the K most frequently occurring items.

The clean Python approach uses Counter from the collections module:

python
from collections import Counter

def top_k_frequent(nums, k):
   count = Counter(nums)
   return [item for item, freq in count.most_common(k)]

Counter.most_common(k) returns the k elements with the highest counts in descending order. It uses a heap internally, giving O(n log k) time complexity, which is more efficient than sorting all counts when k is much smaller than n.

If you can't use Counter, the manual approach builds a frequency dictionary, then uses heapq.nlargest():

python
import heapq

def top_k_frequent_manual(nums, k):
   freq = {}
   for num in nums:
       freq[num] = freq.get(num, 0) + 1
   return heapq.nlargest(k, freq, key=freq.get)

Q39. Given a list of integers from 1 to N with exactly one number missing, find the missing number.

The mathematical approach uses the fact that the sum of integers from 1 to N equals N * (N + 1) / 2. The difference between that expected sum and the actual sum of your list is the missing number.

python
def find_missing(nums):
   n = len(nums) + 1
   expected_sum = n * (n + 1) // 2
   return expected_sum - sum(nums)

This runs in O(n) time and O(1) space: no sorting, no sets, no extra memory proportional to the input size. The XOR approach is an alternative that also runs in O(n) time and O(1) space, but the sum approach is easier to explain in an interview setting.
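For completeness, here is a sketch of the XOR alternative mentioned above. It relies on the fact that x ^ x = 0, so XOR-ing every expected value against every actual value cancels all the matched pairs and leaves only the missing number:

```python
def find_missing_xor(nums):
    result = 0
    # XOR all expected values 1..N
    for i in range(1, len(nums) + 2):
        result ^= i
    # XOR all actual values -- matched pairs cancel out
    for num in nums:
        result ^= num
    return result

print(find_missing_xor([1, 2, 4, 5]))  # 3
```

This avoids any risk of integer overflow in languages with fixed-width integers, though in Python (with arbitrary-precision ints) that advantage is mostly theoretical.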

Q40. Find the most frequently occurring word in a very large text file in a memory-safe way.

The key here is processing the file line by line rather than loading it all into memory, combined with a counter for efficient frequency tracking:

python
from collections import Counter
import re

def most_frequent_word(filepath):
   word_counts = Counter()

   with open(filepath, 'r') as f:
       for line in f:
           words = re.findall(r'\b[a-z]+\b', line.lower())
           word_counts.update(words)

   return word_counts.most_common(1)[0][0]

Processing line by line means memory usage stays constant regardless of file size. re.findall() with a word boundary pattern handles punctuation and case normalization. Counter.update() accumulates counts incrementally across all lines.

For extremely large files across distributed storage, the production approach would use Apache Spark or a MapReduce pattern. But for a single large file in an interview context, this solution demonstrates the right memory-aware thinking.

Python Coding Interview Questions and Answers: Advanced Internals and Real-World Scenarios

This section separates mid-level candidates from senior ones. The questions here don't just test whether you know a feature exists; they test whether you understand why it exists, how it works internally, and when you'd actually reach for it in production code.

Q41. What is the difference between a generator and an iterator?

This is one of those Python Coding Interview Questions and Answers topics where a lot of candidates use the terms interchangeably and get caught out.

Here's the precise distinction:

  • An iterator is any object that implements two methods: __iter__() and __next__(). When __next__() has nothing left to return, it raises StopIteration.

  • A generator is a special kind of iterator created by a function that uses yield. Generators automatically implement __iter__() and __next__() behind the scenes.

python
# Custom iterator -- manual implementation
class CountUp:
   def __init__(self, limit):
       self.limit = limit
       self.current = 0

   def __iter__(self):
       return self

   def __next__(self):
       if self.current >= self.limit:
           raise StopIteration
       self.current += 1
       return self.current

# Generator -- same behavior, much less code
def count_up(limit):
   for i in range(1, limit + 1):
       yield i

Every generator is an iterator because it implements both required methods. But not every iterator is a generator. A class-based iterator like CountUp above is an iterator but not a generator. Generators are simply the most convenient way to create iterators in Python.
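You can verify the relationship directly: a generator object passes the standard Iterator check and supports next() out of the box. A small self-contained sketch (count_up redefined here so the snippet runs standalone):

```python
from collections.abc import Iterator

def count_up(limit):
    for i in range(1, limit + 1):
        yield i

gen = count_up(3)
assert isinstance(gen, Iterator)  # every generator is an iterator
assert iter(gen) is gen           # __iter__ returns the generator itself
assert next(gen) == 1             # __next__ is implemented automatically
```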

Q42. What is a context manager and how does the with statement work?

A context manager is an object that defines setup and teardown behavior for a block of code. The with statement handles the setup before the block runs and the teardown after it finishes, even if an exception occurs inside the block.

Under the hood, the with statement calls __enter__() at the start and __exit__() at the end.

Class-based approach:

python
class ManagedFile:
   def __init__(self, filepath):
       self.filepath = filepath

   def __enter__(self):
       self.file = open(self.filepath, 'r')
       return self.file

   def __exit__(self, exc_type, exc_val, exc_tb):
       self.file.close()
       return False  # don't suppress exceptions

with ManagedFile('data.txt') as f:
   content = f.read()

contextlib approach (simpler for most cases):

python
from contextlib import contextmanager

@contextmanager
def managed_file(filepath):
    f = open(filepath, 'r')
    try:
        yield f  # the with block runs while execution is paused here
    finally:
        f.close()  # teardown runs even if the block raised

with managed_file('data.txt') as f:
    content = f.read()

The contextlib approach is cleaner for simple cases. The class-based approach is better when the context manager needs to maintain state across multiple uses or needs more complex exception handling logic.

Q43. What does duck typing mean in Python?

Duck typing comes from the saying "if it walks like a duck and quacks like a duck, it's a duck." In Python, the type of an object matters less than whether it has the methods or attributes you need.

python
def process(data):
   for item in data:
       print(item)

process([1, 2, 3])        # works -- list is iterable
process((1, 2, 3))        # works -- tuple is iterable
process("hello")          # works -- string is iterable
process({"a": 1, "b": 2}) # works -- dict is iterable

The process() function doesn't check whether data is a list or a tuple. It just tries to iterate over it. If the object supports iteration, it works. If it doesn't, Python raises a TypeError at runtime.

How this affects how you write functions:

Rather than writing isinstance(data, list) checks, you write code that assumes the object has the behavior you need and handles the exception if it doesn't. This makes Python functions naturally more flexible and reusable across different input types.
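This style is often called EAFP ("easier to ask forgiveness than permission"). A minimal sketch of what that looks like in practice, using a hypothetical total() helper:

```python
def total(values):
    # Assume the object is iterable and summable; handle failure if not
    try:
        return sum(values)
    except TypeError:
        return 0

print(total([1, 2, 3]))  # 6
print(total(42))         # 42 is not iterable, so the fallback returns 0
```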

Q44. When would you use multiprocessing versus multithreading in Python?

The answer comes back to the GIL.

Use multithreading when:

  • The task is I/O-bound. Reading files, making network requests, waiting for database queries
  • Threads spend most of their time waiting, not computing
  • The GIL is released during I/O operations, so multiple threads make real progress concurrently

python
import threading
import requests

def fetch_url(url):
   # I/O-bound -- threading works well here
   response = requests.get(url)
   return response.status_code

threads = [threading.Thread(target=fetch_url, args=(url,)) for url in urls]

Use multiprocessing when:

  • The task is CPU-bound. Heavy computation, data transformation, image processing
  • Each process gets its own Python interpreter and its own GIL
  • True parallelism across CPU cores is possible

python
from multiprocessing import Pool

def process_chunk(data_chunk):
   # CPU-bound -- multiprocessing gives real parallelism
   return [x ** 2 for x in data_chunk]

with Pool(processes=4) as pool:
   results = pool.map(process_chunk, chunks)

The practical rule is straightforward: I/O-bound tasks use threads, CPU-bound tasks use processes. Mixing them up is one of the most common performance mistakes in Python concurrency.

Q45. What is Method Resolution Order (MRO) and how does C3 linearization work?

MRO is the order in which Python searches through a class hierarchy to find a method or attribute. It matters most in multiple inheritance scenarios where the same method name exists in more than one parent class.

Python uses the C3 linearization algorithm to determine MRO. The result always follows these rules:

  • A class always appears before its parents
  • The order of parent classes in the class definition is preserved
  • No class appears before another class that depends on it

python
class A:
   def hello(self):
       return "A"

class B(A):
   def hello(self):
       return "B"

class C(A):
   def hello(self):
       return "C"

class D(B, C):
   pass

print(D.__mro__)
# (D, B, C, A, object)
print(D().hello())
# "B" -- B comes first in the MRO

You can inspect the MRO of any class using ClassName.__mro__ or ClassName.mro(). When designing class hierarchies, it's worth checking the MRO explicitly to confirm which method will actually be called when names collide across parents.

Q46. What are Python metaclasses?

A metaclass is the class of a class. Just as a regular class defines how its instances behave, a metaclass defines how a class itself behaves, including how it's created, what attributes it has, and what happens when you subclass it.

In Python, the default metaclass for all classes is type. When you write class MyClass: pass, Python is effectively calling type('MyClass', (object,), {}) behind the scenes.

python
class SingletonMeta(type):
   _instances = {}

   def __call__(cls, *args, **kwargs):
       if cls not in cls._instances:
           cls._instances[cls] = super().__call__(*args, **kwargs)
       return cls._instances[cls]

class DatabaseConnection(metaclass=SingletonMeta):
   pass

db1 = DatabaseConnection()
db2 = DatabaseConnection()
print(db1 is db2)  # True -- same instance returned both times

When metaclasses are actually used in production:

  • Enforcing that subclasses implement specific methods (similar to abstract base classes)
  • Automatically registering subclasses in a plugin or handler registry
  • Adding validation or transformation to class attributes at class creation time

Metaclasses are powerful but complex. For most use cases, class decorators or __init_subclass__() are simpler alternatives that achieve the same result with less indirection.
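To illustrate the simpler alternative, here's a hypothetical plugin registry built with __init_subclass__() instead of a metaclass -- the hook runs automatically every time the base class is subclassed:

```python
class Handler:
    registry = {}

    def __init_subclass__(cls, **kwargs):
        # Called automatically whenever Handler is subclassed
        super().__init_subclass__(**kwargs)
        Handler.registry[cls.__name__] = cls

class CsvHandler(Handler):
    pass

class JsonHandler(Handler):
    pass

print(sorted(Handler.registry))  # ['CsvHandler', 'JsonHandler']
```

Compare this with the SingletonMeta example above: the registration logic lives in an ordinary method on the base class, with no metaclass machinery to explain.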

Q47. What is late binding in Python closures and how does the lambda-in-a-loop bug occur?

Late binding means Python closures look up variable values at the time the function is called, not at the time it's defined.

This produces a classic bug when creating functions inside a loop:

python
functions = []
for i in range(5):
   functions.append(lambda: i)

print([f() for f in functions])
# [4, 4, 4, 4, 4] -- not [0, 1, 2, 3, 4]

All five lambdas reference the same variable i. By the time any of them are called, the loop has finished and i equals 4. Every lambda returns 4.

The fix is to capture the current value of i at definition time using a default argument:

python
functions = []
for i in range(5):
   functions.append(lambda x=i: x)

print([f() for f in functions])
# [0, 1, 2, 3, 4]

Default argument values are evaluated at function definition time, not call time. So each lambda captures its own copy of i's value at the moment it was created.

Q48. What are __slots__ in Python?

By default, Python stores instance attributes in a dictionary called __dict__ on each object. This gives you the flexibility to add attributes dynamically but comes with memory overhead; the dictionary itself takes space, and each attribute lookup involves dictionary operations.

__slots__ replaces __dict__ with a fixed set of attributes defined at class creation time:

python
class Point:
   __slots__ = ['x', 'y']

   def __init__(self, x, y):
       self.x = x
       self.y = y

p = Point(1, 2)
p.z = 3  # raises AttributeError -- z is not in __slots__

Benefits of __slots__:

  • Reduced memory usage per instance, significant when creating millions of objects
  • Slightly faster attribute access due to a simpler lookup mechanism
  • Prevents accidental creation of new attributes through typos

Trade-offs:

  • You can no longer add attributes dynamically
  • Inheritance with __slots__ requires careful handling. If a parent class doesn't define __slots__, the subclass instances still have a __dict__
  • Makes the class less flexible for general-purpose use

Use __slots__ when you're creating large numbers of instances of a class with a fixed, known set of attributes. Configuration objects, data records, and coordinate or point classes are common examples.
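A quick way to see the difference is to compare a slotted class with a regular one. The slotted instance has no per-instance __dict__ at all, and attempts to add new attributes fail:

```python
class SlottedPoint:
    __slots__ = ('x', 'y')
    def __init__(self, x, y):
        self.x, self.y = x, y

class RegularPoint:
    def __init__(self, x, y):
        self.x, self.y = x, y

s = SlottedPoint(1, 2)
r = RegularPoint(1, 2)

assert not hasattr(s, '__dict__')  # no per-instance dict to pay for
assert hasattr(r, '__dict__')      # regular instances carry one

try:
    s.z = 3
except AttributeError:
    print("cannot add new attributes to a slotted instance")
```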

Q49. What is monkey patching in Python?

Monkey patching means dynamically modifying a class or module at runtime. Replacing or adding attributes, methods, or behaviors after the code has been loaded.

A legitimate use case (patching in tests):

python
import requests

def mock_get(url):
   class MockResponse:
       status_code = 200
       def json(self):
           return {"result": "mocked"}
   return MockResponse()

# In a test
requests.get = mock_get

response = requests.get("https://api.example.com/data")
print(response.json())  # {"result": "mocked"}

This lets you test code that makes HTTP requests without making real network calls. Libraries like unittest.mock provide a cleaner, more controlled way to do the same thing.

Risks in production code:

  • Makes code harder to understand, behavior changes at runtime in ways that aren't obvious from reading the source
  • Creates fragile dependencies between components that weren't designed to interact
  • Can break silently when the patched library updates its internal structure

The general principle: monkey patching is acceptable in tests, questionable in application code, and almost always a sign of a design problem in production systems.

Q50. What is the difference between async/await and multithreading in Python?

Both allow a program to work on multiple things without waiting for each one to finish. But they do it in completely different ways.

Multithreading uses OS-managed threads. The OS switches between threads, and each thread can be interrupted at any point. This context switching has overhead, and shared state between threads requires locks to prevent race conditions.

Async/await uses cooperative concurrency. A single thread runs an event loop, and tasks voluntarily yield control when they're waiting for something (like a network response). There's no OS context switching and no shared state problems because only one coroutine runs at a time.

python
import asyncio
import aiohttp

async def fetch(session, url):
   async with session.get(url) as response:
       return await response.text()

async def main():
   async with aiohttp.ClientSession() as session:
       results = await asyncio.gather(
           fetch(session, "https://api1.example.com"),
           fetch(session, "https://api2.example.com"),
       )
   return results

When async/await is the better choice:

  • Making many simultaneous network requests
  • Building high-concurrency web servers or API clients
  • Any I/O-bound workload where you control the entire call stack

Where it offers no advantage:

  • CPU-bound computation, a single-threaded event loop can't parallelize heavy computation
  • Code that calls synchronous blocking libraries. One blocking call blocks the entire event loop

The practical summary: async/await handles high-concurrency I/O more efficiently than threads, but only works well when the entire chain of calls is async-compatible.
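The aiohttp example above needs a third-party library and live endpoints. Here's a self-contained sketch of the same gather pattern, using asyncio.sleep to stand in for network I/O (fake_fetch is a placeholder, not a real API):

```python
import asyncio

async def fake_fetch(name, delay):
    # Simulates an I/O wait; control returns to the event loop here
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # All three "requests" wait concurrently, not one after another
    return await asyncio.gather(
        fake_fetch("a", 0.01),
        fake_fetch("b", 0.01),
        fake_fetch("c", 0.01),
    )

print(asyncio.run(main()))  # ['a done', 'b done', 'c done']
```

Note that gather preserves the order of its arguments in the results list, regardless of which coroutine finishes first.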

Advanced Python Interview Questions: Architecture, Optimization, and Scripting

These are the questions asked at the senior and lead engineer levels. They go beyond knowing what Python features do. They test whether you can make sound architectural decisions, write production-ready code, and reason clearly about performance, reliability, and maintainability.

Q51. How do you optimize a slow Python script?

The first rule of optimization is: don't guess. Profile first, then fix what the data tells you is slow.

Step 1 – Profile with cProfile:

python
import cProfile
cProfile.run('your_function()')

cProfile gives you a breakdown of how much time was spent in each function call. Focus on the functions with the highest cumulative time, not just the ones called most frequently.

Step 2 – Line-level profiling with line_profiler:

python
# Install: pip install line_profiler
# Decorate the function you want to profile
@profile
def slow_function():
   ...
# Run with: kernprof -l -v script.py

Common bottlenecks and their fixes:

  • Python loops over large datasets: Replace with NumPy vectorized operations or list comprehensions
  • Repeated dictionary or set lookups inside loops that could be moved outside
  • Unnecessary object creation: Inside tight loops, create objects once and reuse them
  • Slow I/O: Use buffered reads, async I/O, or batch operations instead of reading one record at a time
  • String concatenation in loops: Use ''.join(list) instead of += inside a loop, which creates a new string object on every iteration

The highest-impact optimizations in most Python scripts come from algorithmic improvements: replacing an O(n²) approach with an O(n log n) one delivers far more than any micro-optimization.
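As a concrete instance of the string-concatenation bullet, the two approaches below produce identical results, but join builds the string in one pass while repeated += can reallocate and copy on each iteration:

```python
parts = [str(i) for i in range(5)]

# Worst case quadratic: each += may copy the whole string built so far
slow = ""
for p in parts:
    slow += p

# Linear: join computes the final size once and copies each piece once
fast = "".join(parts)

print(fast)  # 01234
assert slow == fast
```

(CPython sometimes optimizes += on strings in-place, so the gap is workload-dependent, but join is the reliable choice.)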

Q52. What is the difference between deep and lazy imports in Python?

Deep imports (also called eager imports) load a module and all its dependencies at import time. This is the default Python behavior.

python
import pandas as pd  # loads the entire pandas library immediately

Lazy imports defer the import until the module is actually needed at runtime:

python
def process_data(filepath):
   import pandas as pd  # only imported when this function is called
   return pd.read_csv(filepath)

Why lazy imports improve startup time:

In large applications, importing everything at startup can add hundreds of milliseconds before the application is ready. Lazy imports mean only the modules needed for the current operation are loaded; everything else waits.

Trade-offs:

  • Import errors surface at runtime rather than at startup, making them harder to catch early
  • The first call to a lazily imported function is slower because it includes import time
  • Code is slightly harder to read because imports are scattered rather than grouped at the top

A practical middle ground: use lazy imports for heavy optional dependencies (large ML libraries, visualization tools) and keep core imports at the top of the file.
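One way to package this pattern is a small proxy that defers importlib.import_module until the first attribute access. LazyModule here is a hypothetical helper for illustration, not a standard library class:

```python
import importlib

class LazyModule:
    """Defers the real import until an attribute is first accessed."""
    def __init__(self, name):
        self._name = name
        self._module = None

    def __getattr__(self, attr):
        # Only called when normal attribute lookup fails,
        # i.e. for anything other than _name and _module
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

json = LazyModule('json')    # nothing imported yet
print(json.dumps({'a': 1}))  # the real import happens here
```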

Q53. How do you handle memory leaks in Python?

Memory leaks in Python are less common than in languages without garbage collection, but they do happen, particularly in long-running services.

Common causes:

  • Reference cycles that the GC misses: objects that reference each other and define __del__ methods can delay or complicate cleanup
  • Growing caches without bounds: Dictionaries or lists used as caches that accumulate entries and never evict old ones
  • Event listeners or callbacks: Registering callbacks that hold references to objects, preventing them from being collected
  • Global variables that accumulate data over time

Tools for detection:

python
import tracemalloc

tracemalloc.start()
# ... run your code ...
snapshot = tracemalloc.take_snapshot()
top_stats = snapshot.statistics('lineno')
for stat in top_stats[:5]:
   print(stat)

tracemalloc shows you exactly which lines of code are allocating memory and how much. For more complex analysis, memory_profiler and objgraph help you visualize reference counts and identify what's holding objects in memory.

Fixing a reference cycle the GC isn't catching:

Use weakref.ref() to create weak references: references that don't increment an object's reference count, allowing it to be collected when nothing else holds a strong reference to it.
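A minimal demonstration of weakref.ref(). In CPython, reference counting collects the object as soon as the last strong reference disappears, at which point the weak reference goes dead:

```python
import weakref

class Node:
    pass

n = Node()
r = weakref.ref(n)   # weak reference -- does not keep n alive

assert r() is n      # dereference while the object still exists
del n                # drop the only strong reference
assert r() is None   # the object was collected; r() now returns None
print("object collected")
```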

Q54. What are Python design patterns and when would you use Singleton, Factory, and Observer?

Design patterns are reusable solutions to recurring software design problems. Python's dynamic nature means some classical patterns are built into the language already, but three are worth knowing explicitly.

Singleton – Ensure only one instance of a class exists:

python
class DatabaseConnection:
   _instance = None

   def __new__(cls):
       if cls._instance is None:
           cls._instance = super().__new__(cls)
       return cls._instance

db1 = DatabaseConnection()
db2 = DatabaseConnection()
print(db1 is db2)  # True

Use when you need a single shared resource. Database connections, configuration managers, logging handlers.

Factory – Create objects without specifying the exact class:

python
class NotificationFactory:
   @staticmethod
   def create(channel):
       if channel == 'email':
           return EmailNotification()
       elif channel == 'sms':
           return SMSNotification()
       raise ValueError(f"Unknown channel: {channel}")

notifier = NotificationFactory.create('email')

Use when object creation logic is complex or when you want to decouple the code that uses an object from the code that creates it.

Observer – Notify multiple objects when state changes:

python
class EventEmitter:
   def __init__(self):
       self._listeners = {}

   def on(self, event, callback):
       self._listeners.setdefault(event, []).append(callback)

   def emit(self, event, data=None):
       for callback in self._listeners.get(event, []):
           callback(data)

emitter = EventEmitter()
emitter.on('data_ready', lambda d: print(f"Processing: {d}"))
emitter.emit('data_ready', {'records': 1000})

Use when multiple components need to react to state changes without being tightly coupled to the component generating those changes.

Pythonic alternatives: Many classical patterns are unnecessary in Python. Singleton can be replaced with a module-level variable, Factory with a dictionary mapping names to classes, and Strategy with first-class functions.
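To make that last point concrete, here is a factory rewritten as a plain dictionary lookup. The notification classes are stand-ins for whatever your application defines:

```python
class EmailNotification:
    channel = 'email'

class SMSNotification:
    channel = 'sms'

# The "factory" is just a mapping from name to class
NOTIFIERS = {
    'email': EmailNotification,
    'sms': SMSNotification,
}

def create(channel):
    try:
        return NOTIFIERS[channel]()  # look up the class, instantiate it
    except KeyError:
        raise ValueError(f"Unknown channel: {channel}")

print(create('email').channel)  # email
```

Adding a new notification type is now a one-line dictionary entry rather than another elif branch.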

Q55. How do you write a Python script that is executable on Linux or macOS?

Three things are required to make a Python script directly executable from the command line:

Step 1 – Add a shebang line as the first line of the script:

python
#!/usr/bin/env python3

This tells the OS which interpreter to use when the file is executed directly. Using /usr/bin/env python3 rather than a hardcoded path like /usr/bin/python3 makes the script portable across machines with Python installed in different locations.

Step 2 – Make the file executable:

bash
chmod +x script.py

Step 3 – Handle command-line arguments using argparse:

python
#!/usr/bin/env python3
import argparse

parser = argparse.ArgumentParser(description='Process a data file')
parser.add_argument('filepath', help='Path to input file')
parser.add_argument('--verbose', action='store_true', help='Enable verbose output')
args = parser.parse_args()

print(f"Processing {args.filepath}")
if args.verbose:
   print("Verbose mode enabled")

argparse is the right choice over sys.argv for any script with more than one argument. It automatically generates help text, validates input types, and produces clear error messages when required arguments are missing.

Q56. What is the difference between dict.get(key) and dict[key], and how do mutable default arguments cause bugs?

dict.get(key) vs dict[key]:

python
data = {'name': 'Alice'}

print(data['age'])        # raises KeyError -- key doesn't exist
print(data.get('age'))    # returns None -- no error
print(data.get('age', 0)) # returns 0 -- custom default

Use dict[key] when the key must exist and its absence is a genuine error. Use dict.get(key) when the key might not be present and you want a fallback value instead of an exception.

The mutable default argument bug:

python
def add_item(item, collection=[]):
   collection.append(item)
   return collection

print(add_item('a'))  # ['a']
print(add_item('b'))  # ['a', 'b'] -- 'a' is still there
print(add_item('c'))  # ['a', 'b', 'c'] -- the list keeps growing

Default argument values are evaluated once when the function is defined, not each time it's called. The same list object is reused across all calls. This is one of the most consistently surprising behaviors in Python for developers who haven't encountered it before.

The fix:

python
def add_item(item, collection=None):
   if collection is None:
       collection = []
   collection.append(item)
   return collection

Use None as the default and create a fresh mutable object inside the function body. This pattern applies to lists, dictionaries, and sets used as default arguments.

Q57. How do you handle logging in Python scripts?

Using print() for debugging and monitoring is fine for quick scripts. For anything running in production, Python's logging module is the right tool.

What logging provides that print doesn't:

  • Severity levels: DEBUG, INFO, WARNING, ERROR, CRITICAL, letting you filter output by importance
  • Automatic timestamps and source location in log messages
  • Multiple output handlers: write to the console, a file, and a remote service simultaneously
  • The ability to turn off debug-level messages in production without changing code

A sensible production logging configuration:

python
import logging

logging.basicConfig(
   level=logging.INFO,
   format='%(asctime)s %(levelname)s %(name)s %(message)s',
   handlers=[
       logging.StreamHandler(),
       logging.FileHandler('app.log')
   ]
)

logger = logging.getLogger(__name__)

logger.info("Application started")
logger.warning("Config file not found, using defaults")
logger.error("Database connection failed", exc_info=True)

Using __name__ as the logger name means each module gets its own logger, making it easy to trace which part of the application a log message came from. exc_info=True includes the full traceback in error log entries.

Q58. How do you implement retry logic with exponential backoff for failed network requests?

When a request fails because of a transient error, retrying after an exponentially increasing delay gives the remote service time to recover instead of hammering it.

Manual implementation:

python
import time
import requests

def fetch_with_retry(url, max_retries=5, base_delay=1):
   for attempt in range(max_retries):
       try:
           response = requests.get(url, timeout=10)
           response.raise_for_status()
           return response.json()
       except requests.exceptions.HTTPError as e:
           if e.response.status_code == 404:
               raise  # don't retry on 404 -- it won't resolve itself
           wait_time = base_delay * (2 ** attempt)
           time.sleep(wait_time)
       except requests.exceptions.ConnectionError:
           wait_time = base_delay * (2 ** attempt)
           time.sleep(wait_time)
   raise Exception(f"Failed after {max_retries} attempts")

Using the tenacity library for cleaner implementation:

python
from tenacity import retry, stop_after_attempt, wait_exponential, retry_if_exception_type

@retry(
   stop=stop_after_attempt(5),
   wait=wait_exponential(multiplier=1, min=1, max=30),
   retry=retry_if_exception_type(requests.exceptions.ConnectionError)
)
def fetch_data(url):
   response = requests.get(url)
   response.raise_for_status()
   return response.json()

The key design decision is knowing which errors to retry. Transient errors like connection timeouts and 503 responses are worth retrying. Client errors like 400 and 404 are not; retrying won't change the outcome.

Q59. How do you write unit tests for a Python function using unittest or pytest?

Basic pytest test:

python
def add(a, b):
   return a + b

def test_add_positive_numbers():
   assert add(2, 3) == 5

def test_add_negative_numbers():
   assert add(-1, -1) == -2

def test_add_zero():
   assert add(0, 5) == 5

Run with pytest test_file.py. pytest automatically discovers and runs any function prefixed with test_.

Mock vs stub:

  • A mock is an object that records how it was called: you can assert that specific methods were called with specific arguments
  • A stub is an object that returns predefined responses: you use it to replace a dependency with controlled, predictable behavior

Testing a function that makes API calls:

python
import requests
from unittest.mock import patch, MagicMock

def get_user(user_id):
   response = requests.get(f"https://api.example.com/users/{user_id}")
   return response.json()

def test_get_user():
   mock_response = MagicMock()
   mock_response.json.return_value = {"id": 1, "name": "Alice"}

   with patch('requests.get', return_value=mock_response):
       result = get_user(1)

   assert result["name"] == "Alice"

The patch context manager replaces requests.get with a mock for the duration of the test. The real API is never called. This makes tests fast, reliable, and independent of external services.

Q60. How do you manage dependencies and virtual environments in a Python project at scale?

This is one of those Advanced Python Interview Questions where the right answer depends on team size, project complexity, and deployment environment.

Virtual environments isolate project dependencies from the system Python installation and from each other:
 bash
python -m venv venv
source venv/bin/activate    # Linux/macOS
venv\Scripts\activate       # Windows

Comparing the main dependency management tools:

  • pip is the baseline tool. Simple, widely supported, but requirements.txt doesn't lock transitive dependencies reliably
  • pipenv combines pip and virtual environment management into one tool. Generates a Pipfile.lock for reproducible installs. Good for application development
  • poetry handles dependency resolution, version constraints, and package publishing in one tool. Generates poetry.lock for exact reproducibility. Well-suited for both applications and libraries
  • conda manages both Python packages and non-Python dependencies (C libraries, CUDA drivers). Essential for data science and machine learning environments where non-Python system dependencies matter
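For context on what poetry's workflow looks like, a minimal pyproject.toml might resemble the sketch below. The project name, dependencies, and version constraints are purely illustrative.

```toml
[tool.poetry]
name = "my-service"
version = "0.1.0"
description = "Example service (hypothetical)"

[tool.poetry.dependencies]
python = "^3.12"
requests = "^2.32"

[tool.poetry.group.dev.dependencies]
pytest = "^8.0"
```

Running poetry install against this file resolves the constraints, writes the exact versions to poetry.lock, and installs into an isolated environment.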

Ensuring reproducible builds:

bash
# With pip
pip freeze > requirements.txt
pip install -r requirements.txt

# With poetry
poetry export -f requirements.txt --output requirements.txt
poetry install --no-root

For production deployments, always pin exact versions in your lock file and commit it to version control. Deployment environments should install from the lock file, not resolve dependencies fresh each time. Otherwise, a new minor version of a transitive dependency can break a production deployment without any changes to your own code.

How to Prepare for Python Interviews: Practical Tips by Experience Level

Knowing the answers is one part of interview preparation. Showing up ready to have a real technical conversation, think through problems clearly, and communicate your reasoning is equally important. Here's a preparation strategy tailored to where you are right now.

For Freshers Targeting Basic Python Interview Questions

If you're preparing for your first Python role, the foundational sections are your highest priority, and they're also where most early-career candidates lose interviews they could have won.

What to focus on:

  • Master Sections 1 and 2 completely before moving anywhere else. Interviewers use Basic Python Interview Questions to filter candidates in the first 10 minutes. Weak answers on mutability, scope, or the GIL signal that the rest of the conversation isn't worth having
  • Practice writing code in a plain text editor or on paper. Don't rely on IDE autocompletion to get syntax right. In a live coding round, you won't have it
  • Be able to explain three topics without hesitation: the GIL, mutability, and the LEGB scope resolution rule. These appear in almost every Python screening call
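Of those three topics, LEGB is the easiest to demonstrate on a whiteboard. A minimal sketch of the Local, Enclosing, Global, Built-in lookup order (variable names here are arbitrary):

```python
x = "global"

def outer():
    x = "enclosing"
    def inner():
        # No local x here, so Python walks L -> E -> G -> B and
        # stops at the first scope that defines the name
        return x
    return inner()

print(outer())   # 'enclosing' -- found in the Enclosing scope
print(x)         # 'global'    -- the module-level name is untouched
print(min)       # resolved in the Built-in scope, the last stop
```

Being able to predict each of those three lines without running the code is exactly what the interviewer is checking.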

Common mistakes to avoid:

  • Memorizing answers without understanding them. Interviewers follow up. If your answer is memorized rather than understood, the follow-up question will make that obvious
  • Skipping the coding problems in Section 4. Even entry-level roles include at least one live coding question. Two Sum, Valid Parentheses, and finding missing numbers are genuine entry-level questions, not advanced ones
  • Underestimating PEP 8 and code style questions. They seem trivial, but signal whether you've written Python in a real environment or just learned it from tutorials

For Mid-Level Engineers Targeting Data Engineering Roles

If you're aiming for a data engineering, analytics engineering, or data science role, Section 3 and Section 4, which cover the important python data science interview questions, are your core preparation areas.

What to focus on:

  • Get genuinely comfortable writing Pandas operations on real datasets, not just knowing what the methods do. Interviewers in Python Interview Questions for Data Engineer rounds frequently give you a small CSV file and ask you to clean, transform, and analyze it live
  • Know the time complexity of your solutions. Data engineering interviews regularly ask you to compare two approaches. "This works, but is it efficient for a 10GB dataset?" is a question you should be able to answer confidently
  • Practice chunked file processing and memory-safe data handling. Q29 in this guide covers the pattern.

Specific topics that consistently appear in Python Interview Questions for Data Engineer rounds:

  • Handling missing values and explaining when you'd drop versus fill versus impute
  • The difference between merge(), join(), and concat() and when each is appropriate
  • How NumPy vectorization works and why it's faster than Python loops
  • Reading and processing large files without loading them entirely into memory
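The last bullet, processing a large file without loading it all at once, can be sketched without pandas using the stdlib csv module and a generator. The chunk size, column name, and the in-memory demo file are all illustrative.

```python
import csv
import io

def iter_chunks(fileobj, chunk_size=1000):
    """Yield lists of rows so at most chunk_size rows sit in memory at once."""
    reader = csv.DictReader(fileobj)
    chunk = []
    for row in reader:
        chunk.append(row)
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:  # flush the final partial chunk
        yield chunk

# Demo on an in-memory "file"; in practice you'd pass open("big.csv")
data = io.StringIO("amount\n10\n20\n30\n40\n50\n")
total = 0
for chunk in iter_chunks(data, chunk_size=2):
    total += sum(int(row["amount"]) for row in chunk)

print(total)  # 150
```

The same shape works with pandas via read_csv(..., chunksize=N), which yields DataFrames instead of lists of dicts.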

One preparation habit that separates strong data engineering candidates: practice narrating what you're doing as you write code. Interviewers in data roles aren't just evaluating whether your code works; they're evaluating whether you can explain your reasoning to non-technical stakeholders.

For Senior Engineers Targeting Advanced Python Interview Questions

If you're preparing for a senior, lead, or architect-level role, Sections 5 and 6 are where the interview is actually won or lost. Advanced Python Interview Questions at this level don't just test knowledge; they test judgment.

What to focus on:

  • Prepare to discuss trade-offs, not just implementations: For every topic in Sections 5 and 6, practice answering a follow-up question: "When would you not use this?" Knowing when not to use a metaclass, when async/await doesn't help, or when multiprocessing adds more overhead than it saves is what separates senior candidates from mid-level ones
  • Have at least one real production story for each advanced topic: A concrete example of using generators to process a large dataset, catching a memory leak in a long-running service, or designing a retry mechanism for a flaky third-party API is far more convincing than a textbook answer
  • Know your profiling and debugging tools well enough to walk through a real optimization scenario. Interviewers at the senior level often present a hypothetical slow or memory-hungry script and ask how you'd diagnose and fix it

Topics that consistently separate senior candidates from mid-level ones:

  • The practical implications of the GIL on production system architecture, not just what it is, but how it shapes design decisions
  • Memory management and how to detect and fix leaks in long-running services
  • Async/await versus threading trade-offs and the scenarios where each is the right choice
  • Writing testable, observable, production-ready Python. Logging, retry logic, dependency management, and unit testing with mocks

General Preparation Advice for All Levels

Regardless of your experience level, a few habits consistently improve interview performance across all Python Interview Questions categories:

  • Solve at least 20 coding problems before any technical round: The patterns in Section 4 (sliding window, two pointers, hash maps for O(n) lookups, and stack-based parsing) appear repeatedly across Python Coding Interview Questions. Recognizing a pattern takes the pressure off because you're applying a known approach rather than inventing a solution from scratch.
  • Review questions and answers from multiple sources: The same underlying concept appears in many different phrasings. "What is the GIL?" and "Why can't Python use multiple CPU cores effectively?" are the same question. Recognizing the underlying concept matters more than memorizing a specific answer. Working through Python Interview Questions and Answers from several sources builds that pattern recognition.
  • Practice explaining your thinking out loud: Python technical interviews increasingly evaluate communication alongside code correctness. A candidate who talks through their reasoning, flags trade-offs as they write, and asks clarifying questions before diving in consistently performs better than a candidate who writes correct code in silence. Record yourself solving a problem and listen back; most people are surprised by how much they skip over in their explanation.
  • Time yourself on coding problems: Most live coding rounds give you 20 to 30 minutes per problem. Solving problems correctly but slowly won't serve you well. Practice under time pressure so the mechanics of writing Python feel automatic, and you can spend your mental energy on the problem rather than the syntax.
  • Revisit weak areas with real code, not just reading: Reading about generators, closures, or metaclasses builds familiarity. Writing a generator that processes a real dataset, implementing a closure with late binding and then fixing the bug, or building a simple metaclass that enforces method presence builds genuine understanding. Python Coding Interview Questions and Answers are best absorbed by doing, not just reading.
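To make the hash-map pattern concrete: Two Sum, mentioned earlier as a genuine entry-level question, turns an O(n²) nested scan into a single O(n) pass when you trade memory for lookup speed.

```python
def two_sum(nums, target):
    """Return indices of the two numbers summing to target, else None."""
    seen = {}                      # value -> index of values already visited
    for i, n in enumerate(nums):
        complement = target - n
        if complement in seen:     # O(1) dict lookup replaces a nested loop
            return seen[complement], i
        seen[n] = i
    return None

print(two_sum([2, 7, 11, 15], 9))   # (0, 1)
print(two_sum([3, 2, 4], 6))        # (1, 2)
```

The same trade is behind many O(n) answers in interviews: pay O(n) extra memory for a dict or set, and each membership check drops from O(n) to O(1).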

Conclusion

Python interviews in 2026 test a wide range of skills. From Basic Python Interview Questions on syntax and data types, through data handling and algorithmic problem-solving, all the way to production-level architecture and optimization thinking.

The questions in this guide cover the full spectrum of what interviewers actually assess. The strongest candidates across all six sections share one thing in common: they understand why Python features exist, not just what they do. That depth of understanding is what turns a passing interview into an exceptional one.

For data engineering candidates, Section 3 and Section 4 are your highest-priority preparation investments. For senior candidates, Sections 5 and 6 are where the real differentiation happens. For everyone, the foundational sections are non-negotiable. No amount of advanced knowledge compensates for shaky fundamentals when an interviewer starts probing.

Work through this guide section by section. Write actual code for every question you can't answer confidently from memory. Revisit any section where your answers feel surface-level. The goal isn't to memorize answers; it's to build the genuine understanding that makes any variation of these questions approachable.

Next Step

Python is the language that powers Generative AI and knowing it well puts you in a strong position to build real AI systems, not just use them.

NovelVista's Generative AI Professional Certification takes your Python knowledge further, covering LLM integration, RAG pipelines, agent frameworks, and production deployment in a structured, hands-on curriculum built around what hiring managers actually look for in 2026.

Explore NovelVista's Generative AI Professional Certification today.


Author Details

Akshad Modi

AI Architect

An AI Architect plays a crucial role in designing scalable AI solutions, integrating machine learning and advanced technologies to solve business challenges and drive innovation in digital transformation strategies.

