To get started with MLflow you'll need to install the MLflow Python package.
For this, we recommend creating a separate Python environment (for example, a conda environment) and then installing the package there with pip or conda.
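For example, with conda the setup could look like this (the environment name and Python version below are just illustrative):
conda create -n exp-tracking-env python=3.9
conda activate exp-tracking-env
pip install mlflow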
Once you've installed the package, run the command mlflow --version and check the output.
What's the version that you have?
import mlflow
mlflow.__version__
'2.13.0'
Answer of Q1: 2.13.0
Q2. Download and preprocess the data
We'll use the Green Taxi Trip Records dataset to predict the duration of each trip.
Download the data for January, February and March 2023 in parquet format from here.
Use the script preprocess_data.py located in the folder homework to preprocess the data.
The script will:
load the data from the folder <TAXI_DATA_FOLDER> (the folder where you have downloaded the data),
fit a DictVectorizer on the training set (January 2023 data),
save the preprocessed datasets and the DictVectorizer to disk.
Your task is to download the datasets and then execute this command:
python preprocess_data.py --raw_data_path <TAXI_DATA_FOLDER> --dest_path ./output
Tip: go to the 02-experiment-tracking/homework/ folder before executing the command, and change the value of <TAXI_DATA_FOLDER> to the location where you saved the data.
Q3. Train a model with autolog
We will train a RandomForestRegressor (from Scikit-Learn) on the taxi dataset.
We have prepared the training script train.py for this exercise, which can also be found in the folder homework.
The script will:
load the datasets produced by the previous step,
train the model on the training set,
calculate the RMSE score on the validation set.
Your task is to modify the script to enable autologging with MLflow, execute the script, and then launch the MLflow UI to check that the experiment run was properly tracked.
Tip 1: don't forget to wrap the training code with a with mlflow.start_run(): statement as we showed in the videos.
Tip 2: don't modify the hyperparameters of the model to make sure that the training will finish quickly.
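For reference, here is a minimal sketch of what the relevant part of train.py could look like after the change (the variable names X_train, y_train, X_val, y_val and the hyperparameters shown are assumptions; keep whatever the provided script already defines):
import mlflow
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

mlflow.sklearn.autolog()

with mlflow.start_run():
    # keep the script's original hyperparameters (the values here are illustrative)
    rf = RandomForestRegressor(max_depth=10, random_state=0)
    rf.fit(X_train, y_train)
    y_pred = rf.predict(X_val)
    rmse = np.sqrt(mean_squared_error(y_val, y_pred))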
What is the value of the min_samples_split parameter?
Q4. Launch the tracking server locally
Now we want to manage the entire lifecycle of our ML model. In this step, you'll need to launch a tracking server. This way we will also have access to the model registry.
Your task is to:
launch the tracking server on your local machine,
select a SQLite db for the backend store and a folder called artifacts for the artifacts store.
You should keep the tracking server running to work on the next two exercises that use the server.
In addition to backend-store-uri, what else do you need to pass to properly configure the server?
default-artifact-root
serve-artifacts
artifacts-only
artifacts-destination
Answer of Q4: default-artifact-root
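With that option, a typical launch command looks roughly like this (the SQLite file name and artifacts folder are illustrative):
mlflow server --backend-store-uri sqlite:///backend.db --default-artifact-root ./artifacts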
Q5. Tune model hyperparameters
Now let's try to reduce the validation error by tuning the hyperparameters of the RandomForestRegressor using hyperopt. We have prepared the script hpo.py for this exercise.
Your task is to modify the script hpo.py so that the validation RMSE is logged to the tracking server for each run of the hyperparameter optimization (you will need to add a few lines of code to the objective function), and then run the script without passing any parameters.
After that, open the UI and explore the runs from the experiment called random-forest-hyperopt to answer the question below.
Note: Don't use autologging for this exercise.
The idea is to just log the information that you need to answer the question below, including:
the list of hyperparameters that are passed to the objective function during the optimization,
the RMSE obtained on the validation set (February 2023 data).
What's the best validation RMSE that you got?
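For reference, the extra logging inside the objective function might look like this minimal sketch (it assumes the script already builds X_train, y_train, X_val, y_val and receives a params dict from hyperopt, as in the provided template):
import mlflow
import numpy as np
from hyperopt import STATUS_OK
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

mlflow.set_experiment("random-forest-hyperopt")

def objective(params):
    with mlflow.start_run():
        # log the hyperparameters and the validation RMSE for this trial
        mlflow.log_params(params)
        rf = RandomForestRegressor(**params)
        rf.fit(X_train, y_train)
        rmse = np.sqrt(mean_squared_error(y_val, rf.predict(X_val)))
        mlflow.log_metric("rmse", rmse)
    return {'loss': rmse, 'status': STATUS_OK}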
100%|██████████| 15/15 [00:46<00:00, 3.13s/trial, best loss: 5.335419588556921]
Answer of Q5: 5.335
Q6. Promote the best model to the model registry
The results from the hyperparameter optimization are quite good. So, we can assume that we are ready to test some of these models in production. In this exercise, you'll promote the best model to the model registry. We have prepared a script called register_model.py, which will check the results from the previous step and select the top 5 runs. After that, it will calculate the RMSE of those models on the test set (March 2023 data) and save the results to a new experiment called random-forest-best-models.
Your task is to update the script register_model.py so that it selects the model with the lowest RMSE on the test set and registers it to the model registry.
Tip 1: you can use the method search_runs from the MlflowClient to get the model with the lowest RMSE.
Tip 2: to register the model you can use the method mlflow.register_model and you will need to pass the right model_uri in the form of a string that looks like this: "runs:/<RUN_ID>/model", and the name of the model (make sure to choose a good one!).
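A hedged sketch of that selection-and-registration step follows (the experiment name comes from the description above; the metric key test_rmse, the tracking URI, and the registered model name are assumptions for illustration):
import mlflow
from mlflow.tracking import MlflowClient

mlflow.set_tracking_uri("http://127.0.0.1:5000")  # assumed address of the local tracking server from Q4
client = MlflowClient()

# pick the run with the lowest test RMSE from the experiment created by register_model.py
experiment = client.get_experiment_by_name("random-forest-best-models")
best_run = client.search_runs(
    experiment_ids=[experiment.experiment_id],
    order_by=["metrics.test_rmse ASC"],
    max_results=1,
)[0]

# register the corresponding model in the model registry
mlflow.register_model(
    model_uri=f"runs:/{best_run.info.run_id}/model",
    name="nyc-taxi-random-forest",  # illustrative name
)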
The goal of this homework is to train a simple model for predicting the duration of a ride – similar to what we did in this module.
Q1. Downloading the data
We’ll use the same NYC taxi dataset, but instead of “Green Taxi Trip Records”, we’ll use “Yellow Taxi Trip Records”.
Download the data for January and February 2023.
Read the data for January. How many columns are there?
16
17
18
19
import pandas as pd
# Load the data for January 2023
url_january = 'https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2023-01.parquet'
df_january = pd.read_parquet(url_january)
df_january
Answer Q1 : 19
Q2. Computing duration
Now let’s compute the duration variable. It should contain the duration of a ride in minutes.
What’s the standard deviation of the trips duration in January?
32.59
42.59
52.59
62.59
df_january.dtypes
import numpy as np
# The columns 'tpep_pickup_datetime' and 'tpep_dropoff_datetime' are required to calculate the duration
if 'tpep_pickup_datetime' in df_january.columns and 'tpep_dropoff_datetime' in df_january.columns:
    # Compute the duration in minutes
    df_january['duration'] = (df_january['tpep_dropoff_datetime'] - df_january['tpep_pickup_datetime']).dt.total_seconds() / 60
    # Compute the standard deviation of the duration
    std_dev_duration = round(np.std(df_january['duration']), 2)
    print("Standard Deviation of Trip Durations in January 2023:", std_dev_duration)
else:
    print("The necessary columns are not present in the dataset.")
Q3. Dropping outliers
Next, we need to check the distribution of the duration variable. There are some outliers. Let’s remove them and keep only the records where the duration was between 1 and 60 minutes (inclusive).
What fraction of the records is left after you drop the outliers?
90%
92%
95%
98%
# Compute the duration in minutes
df_january['duration'] = (df_january['tpep_dropoff_datetime'] - df_january['tpep_pickup_datetime']).dt.total_seconds() / 60
# Filter the records to keep only those with duration between 1 and 60 minutes (inclusive)
df_filtered = df_january[(df_january['duration'] >= 1) & (df_january['duration'] <= 60)]
# Calculate the fraction of records left
fraction_left = round(len(df_filtered) / len(df_january), 2)
print("Fraction of records left after dropping outliers:", int(fraction_left * 100), "%")
Answer Q3 : 98%
Q4. One-hot encoding
Let’s apply one-hot encoding to the pickup and dropoff location IDs. We’ll use only these two features for our model.
Turn the dataframe into a list of dictionaries (remember to re-cast the ids to strings – otherwise it will label encode them)
Fit a dictionary vectorizer
Get a feature matrix from it
What’s the dimensionality of this matrix (number of columns)?
2
155
345
515
715
import pandas as pd
from sklearn.feature_extraction import DictVectorizer
# Assume df_filtered is already defined and filtered
# Work on a copy of the DataFrame and re-cast the location IDs to strings,
# so the DictVectorizer one-hot encodes them instead of label encoding numeric values
df_filtered_str = df_filtered.copy()
df_filtered_str['PULocationID'] = df_filtered_str['PULocationID'].astype(str)
df_filtered_str['DOLocationID'] = df_filtered_str['DOLocationID'].astype(str)
# Turn the DataFrame into a list of dictionaries
dicts = df_filtered_str[['PULocationID', 'DOLocationID']].to_dict(orient='records')
# Fit a dictionary vectorizer
dv = DictVectorizer()
X = dv.fit_transform(dicts)
# Get the dimensionality of the feature matrix
print("Dimensionality of the feature matrix (number of columns):", X.shape[1])
Answer Q4 : 515
Q5. Training a model
Now let’s use the feature matrix from the previous step to train a model.
Train a plain linear regression model with default parameters
Calculate the RMSE of the model on the training data
What’s the RMSE on train?
3.64
7.64
11.64
16.64
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
# Fit a dictionary vectorizer
dv = DictVectorizer()
X_train = dv.fit_transform(dicts)
# Prepare the target variable
y_train = df_filtered['duration'].values
# Train a linear regression model
lr = LinearRegression()
lr.fit(X_train, y_train)
# Make predictions on the training data
y_pred = lr.predict(X_train)
# Calculate the RMSE on the training data
rmse = round(np.sqrt(mean_squared_error(y_train, y_pred)), 2)
print("RMSE on training data:", rmse)
Answer Q5 : 7.64
Q6. Evaluating the model
Now let’s apply this model to the validation dataset (February 2023).
What’s the RMSE on validation?
3.81
7.81
11.81
16.81
import pandas as pd
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
# Load the data for January 2023
url_january = 'https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2023-01.parquet'
df_january = pd.read_parquet(url_january)
# Compute the duration in minutes
df_january['duration'] = (df_january['tpep_dropoff_datetime'] - df_january['tpep_pickup_datetime']).dt.total_seconds() / 60
# Filter the records to keep only those with duration between 1 and 60 minutes (inclusive)
df_filtered = df_january[(df_january['duration'] >= 1) & (df_january['duration'] <= 60)].copy()
# Cast IDs to string after ensuring they are of object type
df_filtered['PULocationID'] = df_filtered['PULocationID'].astype('object').astype(str)
df_filtered['DOLocationID'] = df_filtered['DOLocationID'].astype('object').astype(str)
# Turn the DataFrame into a list of dictionaries
dicts = df_filtered[['PULocationID', 'DOLocationID']].to_dict(orient='records')
# Fit a dictionary vectorizer
dv = DictVectorizer()
X_train = dv.fit_transform(dicts)
# Prepare the target variable
y_train = df_filtered['duration'].values
# Train a linear regression model
lr = LinearRegression()
lr.fit(X_train, y_train)
# Load the data for February 2023
url_february = 'https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2023-02.parquet'
df_february = pd.read_parquet(url_february)
# Compute the duration in minutes
df_february['duration'] = (df_february['tpep_dropoff_datetime'] - df_february['tpep_pickup_datetime']).dt.total_seconds() / 60
# Filter the records to keep only those with duration between 1 and 60 minutes (inclusive)
df_feb_filtered = df_february[(df_february['duration'] >= 1) & (df_february['duration'] <= 60)].copy()
# Cast IDs to string after ensuring they are of object type
df_feb_filtered['PULocationID'] = df_feb_filtered['PULocationID'].astype('object').astype(str)
df_feb_filtered['DOLocationID'] = df_feb_filtered['DOLocationID'].astype('object').astype(str)
# Turn the DataFrame into a list of dictionaries
dicts_feb = df_feb_filtered[['PULocationID', 'DOLocationID']].to_dict(orient='records')
# Transform the validation data using the same dictionary vectorizer
X_val = dv.transform(dicts_feb)
# Prepare the target variable for the validation data
y_val = df_feb_filtered['duration'].values
# Make predictions on the validation data
y_pred_val = lr.predict(X_val)
# Calculate the RMSE on the validation data
rmse_val = round(np.sqrt(mean_squared_error(y_val, y_pred_val)), 2)
print("RMSE on validation data:", rmse_val)
In this homework, we're going to combine data from various sources to process it in Pandas and generate additional fields.
If not stated otherwise, please use the Colab covered at the livestream to re-use the code snippets.
Question 1: IPO Filings Web Scraping and Data Processing
What's the total sum ($m) of 2023 filings that happened on Fridays?
Re-use the [Code Snippet 1] example to get the data from the web for this endpoint: https://stockanalysis.com/ipos/filings/
Convert 'Filing Date' to datetime() and 'Shares Offered' to float64 (if '-' is encountered, populate with NaNs).
Define a new field 'Avg_price' based on the "Price Range", which equals NaN if no price is specified, the price (if only one number is provided), or the average of the two prices (if a range is given). You may be inspired by the function extract_numbers() in [Code Snippet 4], or you can write your own function to "parse" a string.
Define a column "Shares_offered_value", which equals "Shares Offered" * "Avg_price" (when both columns are defined; otherwise, it's NaN).
Find the total sum in $m (millions of USD, closest INTEGER number) for all filings during 2023 that happened on Fridays (Date.dt.dayofweek()==4). You should see 32 records in total, 24 of which are not null.
(additional: you can read about S-1 IPO filing to understand the context)
Question 2: IPOs "Fixed days hold" strategy
Find the optimal number of days X (between 1 and 30) for which the 75% quantile growth is the highest.
Reuse [Code Snippet 1] to retrieve the list of IPOs from 2023 and 2024 (from URLs: https://stockanalysis.com/ipos/2023/ and https://stockanalysis.com/ipos/2024/). Get all OHLCV daily prices for all stocks with an "IPO date" before March 1, 2024 ("< 2024-03-01") - 184 tickers (without 'RYZB'). Please remove 'RYZB', as it is no longer available on Yahoo Finance.
Sometimes you may need to adjust the symbol name (e.g., 'IBAC' on stockanalysis.com -> 'IBACU' on Yahoo Finance) to locate OHLCV prices for all stocks. Some of the tickers like 'DYCQ' and 'LEGT' were on the market less than 30 days (11 and 21 days, respectively). Let's leave them in the dataset; it just means that you couldn't hold them for more days than they were listed.
Let's assume you managed to buy a new stock (newly listed at its IPO) on the first day at the "Adj Close" price. Your strategy is to hold for exactly X full days (where X is between 1 and 30) and sell at the "Adj Close" price in X days (e.g., if X=1, you sell on the next day). Find X for which the 75% quantile growth (among 185 investments) is the highest.
HINTs:
You can generate 30 additional columns: growth_future_1d … growth_future_30d, join that with the table of min_dates (the first day when each stock has data on Yahoo Finance), and perform vector operations on the resulting dataset, as sketched below.
You can use the DataFrame.describe() function to get mean, min, max, 25-50-75% quantiles.
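For illustration, a rough sketch of that vectorized approach (here prices is a hypothetical long-format DataFrame with one row per ticker per trading day and columns 'Ticker', 'Date', 'Adj Close', sorted by Date within each Ticker):
import pandas as pd

# growth_future_Xd = Adj Close X trading days ahead divided by today's Adj Close
for d in range(1, 31):
    prices[f'growth_future_{d}d'] = (
        prices.groupby('Ticker')['Adj Close'].shift(-d) / prices['Adj Close']
    )

# keep only the first trading day of each ticker (the day you buy at the IPO)
min_dates = prices.groupby('Ticker', as_index=False)['Date'].min()
first_days = prices.merge(min_dates, on=['Ticker', 'Date'])

# 75% quantile of growth across all IPO investments for each holding period X
growth_cols = [f'growth_future_{d}d' for d in range(1, 31)]
q75 = first_days[growth_cols].quantile(0.75)
print(q75.idxmax(), q75.max())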
Additional:
You can also check that the mean and 50th percentile (median) investment returns are negative for most X values, implying that this strategy is a bet on being a "lucky" investor who ends up in the top 25%.
What's your recommendation: Do you suggest pursuing this strategy for an optimal X?
Question 3: Is Growth Concentrated in the Largest Stocks?
Get the share of days (percentage as an integer) when Large Stocks outperform the Largest Stocks on growth_7d (growth over the 7 preceding trading days).
Reuse [Code Snippet 5] to obtain OHLCV stats for 33 stocks for 10 full years of data (2014-01-01 to 2023-12-31):
You should be able to obtain stats for 33 LARGEST STOCKS and 32 LARGE STOCKS.
Calculate growth_7d for every stock and every day. Get the average daily growth_7d for the LARGEST_STOCKS group vs. the LARGE_STOCKS group.
For example, for the first day of data you should have:
Date        ticker_category  growth_7d
2014-01-01  LARGE            1.011684
2014-01-01  LARGEST          1.011797
On that day, the LARGEST group was growing faster than the LARGE one (the newer stocks).
Calculate the number of days when the LARGE GROUP (new smaller stocks) outperforms the LARGEST GROUP, divide it by the total number of trading days (which should be 2595 days), and convert it to a percentage (closest INTEGER value). For example, if you find that 1700 out of 2595 days meet this condition, it means that 1700/2595 = 0.655, or approximately 66% of days, the LARGE stocks were growing faster than the LARGEST ones. This suggests that you should consider extending your dataset with more stocks to seek higher growth.
Question 4: Trying Another Technical Indicators strategy
What's the total gross profit (in THOUSANDS of $) you'll get from trading on CCI (no fees assumption)?
First, run the entire Colab to obtain the full DataFrame of data (after [Code Snippet 9]), and truncate it to the last full 10 years of data (2014-01-01 to 2023-12-31). If you encounter any difficulties running the Colab - you can download it using this link.
Let's assume you've learned about the awesome CCI indicator (Commodity Channel Index), and decided to use only it for your operations.
You defined the "defensive" value of a high threshould of 200, and you trade only on Fridays (Date.dt.dayofweek()==4).
That is, every time you see that CCI is >200 for any stock (out of those 33), you'll invest $1000 (each record when CCI>200) at Adj.Close price and hold it for 1 week (5 trading days) in order to sell at the Adj. Close price.
What's the expected gross profit (no fees) that you get in THOUSANDS $ (closest integer value) over many operations in 10 years? One operation calculations: if you invested $1000 and received $1010 in 5 days - you add $10 to gross profit, if you received $980 - add -$20 to gross profit. You need to sum these results over all trades (460 times in 10 years).
Additional:
Add an approximate fees calculation over the 460 trades from this calculator https://www.degiro.ie/fees/calculator (Product:"Shares, USA and Canada;" Amount per transaction: "1000 EUR"; Transactions per year: "460")
Are you still profitable on those trades?
[EXPLORATORY] Question 5: Finding Your Strategy for IPOs
You've seen in the first questions that the median and average investments are negative in IPOs, and you can't blindly invest in all deals.
How would you correct/refine the approach? Briefly describe the steps and the data you'll try to get (it should be generally feasible to obtain from public sources; no access to companies' internal data).
E.g. (some ideas): Do you want to focus on a specific vertical? Do you want to build a smart comparison vs. existing stocks on the market? Or will you just want to get some features (which ones?), like the total number of people in a company, to find a segment of "successful" IPOs?
Solution of Question 1
[IPO Filings Web Scraping and Data Processing]
What's the total sum ($m) of 2023 filings that happened on Fridays?
import pandas as pd
import requests
from datetime import datetime
# Define function to extract price from "Price Range" string
def extract_price(price_range):
    if pd.isna(price_range):
        return None
    elif "-" not in price_range:
        # Remove leading/trailing spaces and dollar sign ($) before conversion (handle empty string)
        price_without_symbol = price_range.strip(" $")
        return float(price_without_symbol) if price_without_symbol else None
    else:
        prices = price_range.split("-")
        # Check if both prices are valid before converting and calculating average
        if all(price.strip(" $") for price in prices):
            return (float(prices[0].strip(" $")) + float(prices[1].strip(" $"))) / 2
        else:
            return None
# Read datasource
url = "https://stockanalysis.com/ipos/filings/"
response = requests.get(url)
filings_dfs = pd.read_html(response.text)
filings_df = filings_dfs[0]
# Convert 'Filing Date' to datetime
filings_df['Filing Date'] = pd.to_datetime(filings_df['Filing Date'])
# Convert 'Shares Offered' to float
filings_df['Shares Offered'] = pd.to_numeric(filings_df['Shares Offered'].str.replace(',', ''), errors='coerce')
# Define 'Avg_price' column
filings_df['Avg_price'] = filings_df['Price Range'].apply(extract_price)
# Calculate 'Shares_offered_value'
filings_df['Shares_offered_value'] = filings_df['Shares Offered'] * filings_df['Avg_price']
# Filter for 2023 filings on Fridays
friday_filings_2023 = filings_df[(filings_df['Filing Date'].dt.year == 2023) & (filings_df['Filing Date'].dt.dayofweek == 4)]
# Calculate total sum in millions (round to nearest integer)
total_sum_millions = round(friday_filings_2023['Shares_offered_value'].dropna().sum() / 1e6, 0)
print(f"Total sum of 2023 filings on Fridays ($m): {total_sum_millions}")
# Additional info - Check for expected number of records
print(f"Total number of records: {len(filings_df)}")
print(f"Number of records with non-null 'Shares_offered_value' on Fridays in 2023: {len(friday_filings_2023)}")
Total sum of 2023 filings on Fridays ($m): 286.0
Total number of records: 328
Number of records with non-null 'Shares_offered_value' on Fridays in 2023: 32
Solution of Question 2
[IPOs "Fixed days hold" strategy]
Find the optimal number of days X (between 1 and 30) for which the 75% quantile growth is the highest.
import pandas as pd
import requests
import yfinance as yf
from datetime import datetime, timedelta
# Retrieve IPO data for 2023 and 2024
ipo_urls = ["https://stockanalysis.com/ipos/2023/", "https://stockanalysis.com/ipos/2024/"]
ipo_dfs = []
for url in ipo_urls:
    response = requests.get(url)
    ipo_dfs.append(pd.read_html(response.text)[0])
ipo_df = pd.concat(ipo_dfs).reset_index(drop=True)
# Convert 'IPO Date' to datetime without warning
ipo_df['IPO Date'] = pd.to_datetime(ipo_df['IPO Date']).copy()
# Filter IPOs before March 1, 2024
ipo_df = ipo_df[ipo_df['IPO Date'] < '2024-03-01']
# Get ticker symbols
ticker_symbols = ipo_df['Symbol'].tolist()
# Remove RYZB ticker symbol
ticker_symbols.remove('RYZB')
# Adjust ticker symbols if needed
ticker_adjustments = {'IBAC': 'IBACU', 'PTHR': 'PTHRF'} # Add adjustments as needed
for i, symbol in enumerate(ticker_symbols):
    if symbol in ticker_adjustments:
        ticker_symbols[i] = ticker_adjustments[symbol]
# Get OHLCV data for each ticker
start_date = (datetime.now() - timedelta(days=365)).strftime('%Y-%m-%d') # 1 year of data
end_date = datetime.now().strftime('%Y-%m-%d')
prices = yf.download(ticker_symbols, start=start_date, end=end_date)
growth_data = {}
# Calculate growth for DYCQ ticker (11 days market)
holding_periods = range(1, 12)
ticker = 'DYCQ'
for days in holding_periods:
    initial_price = prices['Adj Close'][ticker].iloc[0]
    final_price = prices['Adj Close'][ticker].iloc[days]
    growth = (final_price - initial_price) / initial_price
    if (ticker, days) not in growth_data:
        growth_data[(ticker, days)] = []
    growth_data[(ticker, days)].append(growth)
# Calculate growth for LEGT ticker (21 days market)
holding_periods = range(1, 22)
ticker = 'LEGT'
for days in holding_periods:
    initial_price = prices['Adj Close'][ticker].iloc[0]
    final_price = prices['Adj Close'][ticker].iloc[days]
    growth = (final_price - initial_price) / initial_price
    if (ticker, days) not in growth_data:
        growth_data[(ticker, days)] = []
    growth_data[(ticker, days)].append(growth)
# Remove DYCQ and LEGT tickers symbol
ticker_symbols.remove('DYCQ')
ticker_symbols.remove('LEGT')
# Calculate growth for each holding period
holding_periods = range(1, 31)
for ticker in ticker_symbols:
    try:
        for days in holding_periods:
            initial_price = prices['Adj Close'][ticker].iloc[0]
            final_price = prices['Adj Close'][ticker].iloc[days]
            growth = (final_price - initial_price) / initial_price
            if (ticker, days) not in growth_data:
                growth_data[(ticker, days)] = []
            growth_data[(ticker, days)].append(growth)
    except KeyError:
        print(f"No data found for ticker: {ticker}")
# Create DataFrame from growth data
rows = []
for key, values in growth_data.items():
    for value in values:
        rows.append((key[0], key[1], value))
growth_df = pd.DataFrame(rows, columns=['Ticker', 'Days', 'Growth'])
# Calculate 75% quantile growth for each X
quantile_75 = growth_df.groupby('Days')['Growth'].quantile(0.75)
# Find X where 75% quantile growth is highest
optimal_X = quantile_75.idxmax()
highest_growth = quantile_75.max()
optimal_X, highest_growth
print ("\nThe optimal number of days X (between 1 and 30), where 75% quantile growth is the highest : ", optimal_X)
output
[*********************100%%**********************] 184 of 184 completed
The optimal number of days X (between 1 and 30), where 75% quantile growth is the highest : 29
Solution of Question 3
[Is Growth Concentrated in the Largest Stocks?]
Get the share of days (percentage as an integer) when Large Stocks outperform the Largest Stocks on growth_7d.
import yfinance as yf
import pandas as pd
US_STOCKS = ['MSFT', 'AAPL', 'GOOG', 'NVDA', 'AMZN', 'META', 'BRK-B', 'LLY', 'AVGO', 'V', 'JPM']
EU_STOCKS = ['NVO', 'MC.PA', 'ASML', 'RMS.PA', 'OR.PA', 'SAP', 'ACN', 'TTE', 'SIE.DE', 'IDEXY', 'CDI.PA']
INDIA_STOCKS = ['RELIANCE.NS', 'TCS.NS', 'HDB', 'BHARTIARTL.NS', 'IBN', 'SBIN.NS', 'LICI.NS', 'INFY', 'ITC.NS', 'HINDUNILVR.NS', 'LT.NS']
largest_stock = US_STOCKS + EU_STOCKS + INDIA_STOCKS
NEW_US = ['TSLA', 'WMT', 'XOM', 'UNH', 'MA', 'PG', 'JNJ', 'MRK', 'HD', 'COST', 'ORCL']
NEW_EU = ['PRX.AS', 'CDI.PA', 'AIR.PA', 'SU.PA', 'ETN', 'SNY', 'BUD', 'DTE.DE', 'ALV.DE', 'MDT', 'AI.PA', 'EL.PA']
NEW_INDIA = ['BAJFINANCE.NS', 'MARUTI.NS', 'HCLTECH.NS', 'TATAMOTORS.NS', 'SUNPHARMA.NS', 'ONGC.NS', 'ADANIENT.NS', 'ADANIENT.NS', 'NTPC.NS', 'KOTAKBANK.NS', 'TITAN.NS']
large_stock = NEW_EU + NEW_US + NEW_INDIA
# Now all_stocks includes both LARGEST & LARGE STOCKS
all_stocks = largest_stock + large_stock
# Download OHLCV data for all stocks for 10 years
start_date = '2014-01-01'
end_date = '2023-12-31'
data_largest = yf.download(largest_stock, start=start_date, end=end_date)
data_large = yf.download(large_stock, start=start_date, end=end_date)
data = yf.download(all_stocks, start=start_date, end=end_date)
## Largest data
# Calculate daily growth for the last 7 days for each stock
def calculate_growth_7d_largest(data_largest):
    return data_largest['Adj Close'].pct_change(7)  # Calculate pct change for last 7 days
# Calculate 7-day growth for all stocks using vectorized operations
growth_7d_largest = calculate_growth_7d_largest(data_largest)
# Combine the 'growth_7d' data with the DataFrame
growth_7d_largest.columns = pd.MultiIndex.from_product([['growth_7d_largest'], growth_7d_largest.columns])
data_largest = pd.concat([data_largest, growth_7d_largest], axis=1)
## Large data
# Calculate daily growth for the last 7 days for each stock
def calculate_growth_7d_large(data_large):
    return data_large['Adj Close'].pct_change(7)  # Calculate pct change for last 7 days
# Calculate 7-day growth for all stocks using vectorized operations
growth_7d_large = calculate_growth_7d_large(data_large)
# Combine the 'growth_7d' data with the DataFrame
growth_7d_large.columns = pd.MultiIndex.from_product([['growth_7d_large'], growth_7d_large.columns])
data_large = pd.concat([data_large, growth_7d_large], axis=1)
## All data
# Calculate daily growth for the last 7 days for each stock
def calculate_growth_7d(data):
    return data['Adj Close'].pct_change(7)  # Calculate pct change for last 7 days
# Calculate 7-day growth for all stocks using vectorized operations
growth_7d = calculate_growth_7d(data)
# Combine the 'growth_7d' data with the DataFrame
growth_7d.columns = pd.MultiIndex.from_product([['growth_7d'], growth_7d.columns])
data = pd.concat([data, growth_7d], axis=1)
# Calculate average daily growth for Largest and Large stocks
try:
    daily_growth_largest = data_largest['growth_7d_largest'].mean(axis=1)
    daily_growth_large = data_large['growth_7d_large'].mean(axis=1)
    # Calculate the number of days when Large outperforms Largest
    outperformance_days = (daily_growth_large > daily_growth_largest).sum()
    # Get the total number of trading days
    total_days = len(daily_growth_large)
    # Calculate the percentage of days Large outperforms Largest (as an integer)
    percentage_outperformance = int((outperformance_days / total_days) * 100)
    print(f"\nLarge stocks outperformed Largest stocks on approximately {percentage_outperformance}% of days.")
except KeyError:
    print("Error: 'growth_7d' column not found.")
output
[*********************100%%**********************] 33 of 33 completed
[*********************100%%**********************] 33 of 33 completed
[*********************100%%**********************] 65 of 65 completed
Large stocks outperformed Largest stocks on approximately 46% of days.
Solution of Question 4
See the continuation of Module2_Colab_Working_with_the_data.ipynb (after Code Snippet 9).
Answer of Question 4 : 2
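For reference, a rough illustration of the same logic (not the notebook code itself): it assumes df is a hypothetical long-format DataFrame with one row per ticker per day, columns 'Ticker', 'Date' (datetime), 'High', 'Low', 'Close', 'Adj Close', already truncated to 2014-01-01..2023-12-31 and sorted by Date within each Ticker:
import pandas as pd

def cci(group, n=20):
    # classic Commodity Channel Index: (TP - SMA(TP, n)) / (0.015 * mean absolute deviation of TP)
    tp = (group['High'] + group['Low'] + group['Close']) / 3
    sma = tp.rolling(n).mean()
    mad = tp.rolling(n).apply(lambda x: (x - x.mean()).abs().mean(), raw=False)
    return (tp - sma) / (0.015 * mad)

df['cci'] = df.groupby('Ticker', group_keys=False).apply(cci)
# Adj Close 5 trading days later, per ticker (the selling price)
df['adj_close_5d'] = df.groupby('Ticker')['Adj Close'].shift(-5)

# trade only on Fridays when CCI > 200: invest $1000 per signal, sell after 5 trading days
trades = df[(df['Date'].dt.dayofweek == 4) & (df['cci'] > 200)].dropna(subset=['adj_close_5d'])
gross_profit = (1000 * (trades['adj_close_5d'] / trades['Adj Close'] - 1)).sum()
print(round(gross_profit / 1000), 'thousand $ gross profit (no fees)')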
Solution of Question 5
Refining the Approach to IPO Investing
Since simply investing in all IPOs isn't a winning strategy, here's how we can refine our approach:
Focus on Specific Verticals:
Data: Look for historical IPO data that includes the company's industry sector (vertical). Financial websites or market research firms might offer this data.
Analysis: Analyze which industry sectors have a higher success rate (positive post-IPO returns for investors). This could involve calculating average returns by sector over a specific period.
Smart Comparison with Existing Stocks:
Data: We'll need financial data for both IPO companies and their established competitors. This data could include metrics like the Price-to-Earnings (P/E) ratio, revenue growth, and market capitalization. Public financial databases or financial news websites can be sources for this data.
Analysis: Compare the valuation (e.g., P/E ratio) of the IPO company to its established competitors. Look for companies with a strong track record but a lower valuation than their peers. This might indicate higher future growth potential for the IPO.
Focus on Company Fundamentals (beyond size):
Data: Look for IPO prospectuses, financial statements, and news articles. These can provide insights into the company's:
Revenue and Profit Growth: Look for companies with consistent and sustainable growth in revenue and profits.
Market Opportunity: Analyze the total addressable market (TAM) for the company's product or service. A large and growing TAM indicates significant potential for future growth.
Competitive Advantage: Does the company have a strong moat, a competitive edge that protects it from rivals? This could be brand recognition, intellectual property, or a unique technology.
Management Team: Evaluate the experience and track record of the management team. A strong leadership team is crucial for navigating future challenges and growth.
Bypassing "Total Number of People":
While the total number of employees might be interesting, it's not a strong indicator of a successful IPO. Instead, focusing on the quality and experience of the team is more relevant.