Monday, April 29, 2024

Module 2 - Working with Data in Pandas (Dataframe Analysis)

 

Source: 

1. [Stock Markets Analytics Zoomcamp] Module2 "Working with Data in Pandas" (youtube.com)

2. https://docs.google.com/presentation/d/e/2PACX-1vT5XMStGsWf5tQkt-ulyk4MmWoSXTP4PqglHsrzGIlpd_cQ7nAzxNJVmUS7L67vAbYybZhxMNGZy-kY/pub?start=false&loop=false&delayms=3000

Module 2 Homework

In this homework, we're going to combine data from various sources to process it in Pandas and generate additional fields.

If not stated otherwise, please re-use the code snippets from the Colab covered in the livestream.


Question 1: IPO Filings Web Scraping and Data Processing

What's the total sum ($m) of 2023 filings that happened on Fridays?

Re-use the [Code Snippet 1] example to get the data from the web for this endpoint: https://stockanalysis.com/ipos/filings/
Convert 'Filing Date' to datetime and 'Shares Offered' to float64 (if '-' is encountered, populate with NaNs).
Define a new field 'Avg_price' based on the "Price Range": it equals NaN if no price is specified, the price itself if only one number is provided, or the average of the two prices if a range is given.
You may be inspired by the function extract_numbers() in [Code Snippet 4], or you can write your own function to "parse" a string.
Define a column "Shares_offered_value", which equals "Shares Offered" * "Avg_price" (when both columns are defined; otherwise NaN).

Find the total sum in $m (millions of USD, closest INTEGER number) of all filings during 2023 that happened on Fridays (Date.dt.dayofweek == 4). You should see 32 records in total, 24 of which are not null.

(additional: you can read about S-1 IPO filing to understand the context)


Question 2: IPOs "Fixed days hold" strategy

For which number of days X (between 1 and 30) is the 75% quantile growth the highest?

Reuse [Code Snippet 1] to retrieve the list of IPOs from 2023 and 2024 (from URLs: https://stockanalysis.com/ipos/2023/ and https://stockanalysis.com/ipos/2024/).
Get all OHLCV daily prices for all stocks with an "IPO date" before March 1, 2024 ("< 2024-03-01") - 184 tickers (without 'RYZB'). Please remove 'RYZB', as it is no longer available on Yahoo Finance.

Sometimes you may need to adjust the symbol name (e.g., 'IBAC' on stockanalysis.com -> 'IBACU' on Yahoo Finance) to locate OHLCV prices for all stocks.
Some of the tickers like 'DYCQ' and 'LEGT' were on the market less than 30 days (11 and 21 days, respectively). Let's leave them in the dataset; it just means that you couldn't hold them for more days than they were listed.

Let's assume you managed to buy a new stock (listed at IPO) on the first day at the 'Adj Close' price. Your strategy is to hold for exactly X full days (where X is between 1 and 30) and sell at the 'Adj Close' price in X days (e.g., if X=1, you sell on the next day).
Find the X for which the 75% quantile growth (among 185 investments) is the highest.

HINTs:

  • You can generate 30 additional columns: growth_future_1d … growth_future_30d, join that with the table of min_dates (first day when each stock has data on Yahoo Finance), and perform vector operations on the resulting dataset (see the sketch after these hints).
  • You can use the DataFrame.describe() function to get mean, min, max, 25-50-75% quantiles.
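
A minimal sketch of the first hint (the DataFrame and column names are assumptions, not the Colab's exact code): stocks_df is long-format, one row per (Ticker, Date), with an 'Adj Close' column.

import pandas as pd

def add_growth_future_columns(stocks_df, max_days=30):
    stocks_df = stocks_df.sort_values(['Ticker', 'Date']).copy()
    for x in range(1, max_days + 1):
        # growth if bought at today's Adj Close and sold x trading days later, per ticker
        stocks_df[f'growth_future_{x}d'] = (
            stocks_df.groupby('Ticker')['Adj Close'].shift(-x) / stocks_df['Adj Close']
        )
    # keep only each ticker's first trading day (the "min_dates" join from the hint)
    return stocks_df.groupby('Ticker').head(1)

# first_days = add_growth_future_columns(stocks_df)
# first_days.filter(like='growth_future_').describe()  # 25/50/75% quantiles for each X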

Additional:

  • You can also check that the mean and 50th percentile (median) investment returns are negative for most values of X, implying that this strategy is a wager on being a "lucky" investor who lands in the top 25% (see the sketch after this list).
  • What's your recommendation: Do you suggest pursuing this strategy for an optimal X?
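
A minimal sketch of that check, assuming the growth_df table (columns 'Ticker', 'Days', 'Growth') built in the Solution of Question 2 below:

# mean and median vs. 75% quantile per holding period
stats = growth_df.groupby('Days')['Growth'].describe()[['mean', '50%', '75%']]
print(stats)  # mean and 50% (median) are negative for most X, while the 75% quantile can stay positive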

Question 3: Is Growth Concentrated in the Largest Stocks?

What's the share of days (as an integer percentage) when Large Stocks outperform the Largest Stocks on growth_7d (growth over the 7 trading days back)?

Reuse [Code Snippet 5] to obtain OHLCV stats for 33 stocks
for 10 full years of data (2014-01-01 to 2023-12-31):

US_STOCKS = ['MSFT', 'AAPL', 'GOOG', 'NVDA', 'AMZN', 'META', 'BRK-B', 'LLY', 'AVGO','V', 'JPM']

EU_STOCKS = ['NVO','MC.PA', 'ASML', 'RMS.PA', 'OR.PA', 'SAP', 'ACN', 'TTE', 'SIE.DE','IDEXY','CDI.PA']

INDIA_STOCKS = ['RELIANCE.NS','TCS.NS','HDB','BHARTIARTL.NS','IBN','SBIN.NS','LICI.NS','INFY','ITC.NS','HINDUNILVR.NS','LT.NS']

LARGEST_STOCKS = US_STOCKS + EU_STOCKS + INDIA_STOCKS

Now let's add the top 12-22 stocks (as of end-April 2024):

NEW_US = ['TSLA','WMT','XOM','UNH','MA','PG','JNJ','MRK','HD','COST','ORCL']

NEW_EU = ['PRX.AS','CDI.PA','AIR.PA','SU.PA','ETN','SNY','BUD','DTE.DE','ALV.DE','MDT','AI.PA','EL.PA']

NEW_INDIA = ['BAJFINANCE.NS','MARUTI.NS','HCLTECH.NS','TATAMOTORS.NS','SUNPHARMA.NS','ONGC.NS','ADANIENT.NS','ADANIENT.NS','NTPC.NS','KOTAKBANK.NS','TITAN.NS']

LARGE_STOCKS = NEW_EU + NEW_US + NEW_INDIA

You should be able to obtain stats for 33 LARGEST STOCKS and 32 LARGE STOCKS.

Calculate growth_7d for every stock and every day.
Get the average daily growth_7d for the LARGEST_STOCKS group vs. the LARGE_STOCKS group.

For example, for the first day of data you should have:

Date        ticker_category  growth_7d
2014-01-01  LARGE            1.011684
2014-01-01  LARGEST          1.011797

On that day, the LARGEST group was growing faster than the LARGE group (the newer stocks).

Calculate the number of days when the LARGE group (the newer, smaller stocks) outperforms the LARGEST group, divide it by the total number of trading days (which should be 2595), and convert the result to a percentage (closest INTEGER value). For example, if 1700 out of 2595 days meet this condition, then 1700/2595 = 0.655: on approximately 66% of days the LARGE stocks were growing faster than the LARGEST ones, which suggests you should consider extending your dataset with more stocks to seek higher growth. (A sketch of this calculation follows below.)
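
For illustration, a minimal sketch of this comparison, assuming the per-ticker 7-day growth DataFrames growth_7d_largest and growth_7d_large from the Solution of Question 3 below (whether growth is a ratio or a pct change does not affect the comparison):

import pandas as pd

# average growth_7d per day for each group, stacked into the long format shown above
avg_growth = pd.concat([
    growth_7d_largest.mean(axis=1).rename('growth_7d').to_frame().assign(ticker_category='LARGEST'),
    growth_7d_large.mean(axis=1).rename('growth_7d').to_frame().assign(ticker_category='LARGE'),
]).reset_index().sort_values(['Date', 'ticker_category'])

# share of days when the LARGE group outperforms the LARGEST group
wide = avg_growth.pivot(index='Date', columns='ticker_category', values='growth_7d').dropna()
outperform_pct = round(100 * (wide['LARGE'] > wide['LARGEST']).mean())
print(f"LARGE outperformed LARGEST on ~{outperform_pct}% of days")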


Question 4: Trying Another Technical Indicator Strategy

What's the total gross profit (in THOUSANDS of $) you'll get from trading on CCI (no fees assumption)?

First, run the entire Colab to obtain the full DataFrame of data (after [Code Snippet 9]), and truncate it to the last full 10 years of data (2014-01-01 to 2023-12-31).
If you encounter any difficulties running the Colab - you can download it using this link.

Let's assume you've learned about the awesome CCI indicator (Commodity Channel Index), and decided to use only it for your operations.

You defined a "defensive" value of a high threshold of 200, and you trade only on Fridays (Date.dt.dayofweek == 4).

That is, every time you see that CCI is >200 for any stock (out of those 33), you invest $1000 (for each record when CCI>200) at the Adj Close price, hold for 1 week (5 trading days), and sell at the Adj Close price.

What's the expected gross profit (no fees) that you get in THOUSANDS $ (closest integer value) over many operations in 10 years?
Per-operation calculation: if you invested $1000 and received $1010 in 5 days, you add $10 to gross profit; if you received $980, you add -$20 to gross profit.
You need to sum these results over all trades (460 times in 10 years).

Additional:

  • Add an approximate fee calculation over the 460 trades using this calculator: https://www.degiro.ie/fees/calculator (Product: "Shares, USA and Canada"; Amount per transaction: "1000 EUR"; Transactions per year: "460")
  • Are you still profitable on those trades?

[EXPLORATORY] Question 5: Finding Your Strategy for IPOs

You've seen in the first questions that the median and average IPO investments are negative, and you can't blindly invest in all deals.

How would you correct/refine the approach? Briefly describe the steps and the data you'll try to get (it should be generally feasible to obtain from public sources - no access to companies' internal data).

E.g. (some ideas): Do you want to focus on a specific vertical? Do you want to build a smart comparison vs. existing stocks on the market? Or do you just want to derive some features (which ones?), such as the total number of people in a company, to find a segment of "successful" IPOs?


Submitting the solutions

Form for submitting:

https://courses.datatalks.club/sma-zoomcamp-2024/homework/hw02


Solution of Question 1

[IPO Filings Web Scraping and Data Processing]

What's the total sum ($m) of 2023 filings that happened on Fridays?

import pandas as pd
import requests
from datetime import datetime

# Define function to extract price from "Price Range" string
def extract_price(price_range):
  if pd.isna(price_range):
    return None
  elif "-" not in price_range:
    # Remove leading/trailing spaces and dollar sign ($) before conversion (handle empty string)
    price_without_symbol = price_range.strip(" $")
    return float(price_without_symbol) if price_without_symbol else None
  else:
    prices = price_range.split("-")
    # Check if both prices are valid before converting and calculating average
    if all(price.strip(" $") for price in prices):
      return (float(prices[0].strip(" $")) + float(prices[1].strip(" $"))) / 2
    else:
      return None

# Read datasource
url = "https://stockanalysis.com/ipos/filings/"
response = requests.get(url)
filings_dfs = pd.read_html(response.text)
filings_df = filings_dfs[0]

# Convert 'Filing Date' to datetime
filings_df['Filing Date'] = pd.to_datetime(filings_df['Filing Date'])

# Convert 'Shares Offered' to float
filings_df['Shares Offered'] = pd.to_numeric(filings_df['Shares Offered'].str.replace(',', ''), errors='coerce')

# Define 'Avg_price' column
filings_df['Avg_price'] = filings_df['Price Range'].apply(extract_price)

# Calculate 'Shares_offered_value'
filings_df['Shares_offered_value'] = filings_df['Shares Offered'] * filings_df['Avg_price']

# Filter for 2023 filings on Fridays
friday_filings_2023 = filings_df[(filings_df['Filing Date'].dt.year == 2023) & (filings_df['Filing Date'].dt.dayofweek == 4)]

# Calculate total sum in millions (round to nearest integer)
total_sum_millions = round(friday_filings_2023['Shares_offered_value'].dropna().sum() / 1e6, 0)

print(f"Total sum of 2023 filings on Fridays ($m): {total_sum_millions}")

# Additional info - check the expected record counts
print(f"Total number of records: {len(filings_df)}")
print(f"Number of 2023 Friday filings: {len(friday_filings_2023)}")
print(f"Of which non-null 'Shares_offered_value': {friday_filings_2023['Shares_offered_value'].notna().sum()}")

Total sum of 2023 filings on Fridays ($m): 286.0

Total number of records: 328
Number of 2023 Friday filings: 32
Of which non-null 'Shares_offered_value': 24

Solution of Question 2

[IPOs "Fixed days hold" strategy]

For which number of days X (between 1 and 30) is the 75% quantile growth the highest?




import pandas as pd
import requests
import yfinance as yf
from datetime import datetime, timedelta

# Retrieve IPO data for 2023 and 2024
ipo_urls = ["https://stockanalysis.com/ipos/2023/", "https://stockanalysis.com/ipos/2024/"]
ipo_dfs = []

for url in ipo_urls:
    response = requests.get(url)
    ipo_dfs.append(pd.read_html(response.text)[0])

ipo_df = pd.concat(ipo_dfs).reset_index(drop=True)

# Convert 'IPO Date' to datetime
ipo_df['IPO Date'] = pd.to_datetime(ipo_df['IPO Date'])

# Filter IPOs before March 1, 2024
ipo_df = ipo_df[ipo_df['IPO Date'] < '2024-03-01']

# Get ticker symbols
ticker_symbols = ipo_df['Symbol'].tolist()

# Remove RYZB ticker symbol
ticker_symbols.remove('RYZB')

# Adjust ticker symbols if needed
ticker_adjustments = {'IBAC': 'IBACU', 'PTHR': 'PTHRF'} # Add adjustments as needed
for i, symbol in enumerate(ticker_symbols):
    if symbol in ticker_adjustments:
        ticker_symbols[i] = ticker_adjustments[symbol]


# Get OHLCV data for each ticker
start_date = '2023-01-01'  # cover all 2023+ IPO dates (a rolling 1-year window would miss early-2023 listings)
end_date = datetime.now().strftime('%Y-%m-%d')

prices = yf.download(ticker_symbols, start=start_date, end=end_date)

growth_data = {}

# Tickers listed fewer than 30 days (DYCQ: 11 trading days, LEGT: 21) are handled
# separately: you can only hold them for as many days as they were listed
for ticker in ['DYCQ', 'LEGT']:
  ticker_prices = prices['Adj Close'][ticker].dropna()  # only days with data for this ticker
  initial_price = ticker_prices.iloc[0]  # first trading day (IPO day)
  for days in range(1, min(31, len(ticker_prices))):
    final_price = ticker_prices.iloc[days]  # Adj Close after holding `days` days
    growth = (final_price - initial_price) / initial_price
    growth_data.setdefault((ticker, days), []).append(growth)

# Remove the DYCQ and LEGT ticker symbols (already handled above)
ticker_symbols.remove('DYCQ')
ticker_symbols.remove('LEGT')

# Calculate growth for each holding period
holding_periods = range(1, 31)
for ticker in ticker_symbols:
    try:
        ticker_prices = prices['Adj Close'][ticker].dropna()  # only days with data for this ticker
        for days in holding_periods:
            initial_price = ticker_prices.iloc[0]
            final_price = ticker_prices.iloc[days]
            growth = (final_price - initial_price) / initial_price
            growth_data.setdefault((ticker, days), []).append(growth)
    except KeyError:
        print(f"No data found for ticker: {ticker}")
    except IndexError:
        print(f"Not enough price history for ticker: {ticker}")

# Create DataFrame from growth data
rows = []
for key, values in growth_data.items():
    for value in values:
        rows.append((key[0], key[1], value))
growth_df = pd.DataFrame(rows, columns=['Ticker', 'Days', 'Growth'])

# Calculate 75% quantile growth for each X
quantile_75 = growth_df.groupby('Days')['Growth'].quantile(0.75)

# Find X where 75% quantile growth is highest
optimal_X = quantile_75.idxmax()
highest_growth = quantile_75.max()

print("\nThe optimal number of days X (between 1 and 30), where 75% quantile growth is the highest : ", optimal_X)

output
[*********************100%%**********************]  184 of 184 completed

The optimal number of days X (between 1 and 30), where 75% quantile growth is the highest :  29

Solution of Question 3

[Is Growth Concentrated in the Largest Stocks?]

What's the share of days (as an integer percentage) when Large Stocks outperform the Largest Stocks on growth_7d?



import yfinance as yf
import pandas as pd

US_STOCKS = ['MSFT', 'AAPL', 'GOOG', 'NVDA', 'AMZN', 'META', 'BRK-B', 'LLY', 'AVGO', 'V', 'JPM']
EU_STOCKS = ['NVO', 'MC.PA', 'ASML', 'RMS.PA', 'OR.PA', 'SAP', 'ACN', 'TTE', 'SIE.DE', 'IDEXY', 'CDI.PA']
INDIA_STOCKS = ['RELIANCE.NS', 'TCS.NS', 'HDB', 'BHARTIARTL.NS', 'IBN', 'SBIN.NS', 'LICI.NS', 'INFY', 'ITC.NS', 'HINDUNILVR.NS', 'LT.NS']

largest_stock = US_STOCKS + EU_STOCKS + INDIA_STOCKS

NEW_US = ['TSLA', 'WMT', 'XOM', 'UNH', 'MA', 'PG', 'JNJ', 'MRK', 'HD', 'COST', 'ORCL']
NEW_EU = ['PRX.AS', 'CDI.PA', 'AIR.PA', 'SU.PA', 'ETN', 'SNY', 'BUD', 'DTE.DE', 'ALV.DE', 'MDT', 'AI.PA', 'EL.PA']
NEW_INDIA = ['BAJFINANCE.NS', 'MARUTI.NS', 'HCLTECH.NS', 'TATAMOTORS.NS', 'SUNPHARMA.NS', 'ONGC.NS', 'ADANIENT.NS', 'ADANIENT.NS', 'NTPC.NS', 'KOTAKBANK.NS', 'TITAN.NS']

large_stock = NEW_EU + NEW_US + NEW_INDIA

# Now all_stocks includes both LARGEST & LARGE STOCKS
all_stocks = largest_stock + large_stock

# Download OHLCV data for all stocks for 10 years
start_date = '2014-01-01'
end_date = '2023-12-31'
data_largest = yf.download(largest_stock, start=start_date, end=end_date)
data_large = yf.download(large_stock, start=start_date, end=end_date)
data = yf.download(all_stocks, start=start_date, end=end_date)

# Calculate 7-day growth (pct change over the last 7 trading days) for each stock,
# using vectorized operations
def calculate_growth_7d(ohlcv):
    return ohlcv['Adj Close'].pct_change(7)

## Largest data
growth_7d_largest = calculate_growth_7d(data_largest)
growth_7d_largest.columns = pd.MultiIndex.from_product([['growth_7d_largest'], growth_7d_largest.columns])
data_largest = pd.concat([data_largest, growth_7d_largest], axis=1)

## Large data
growth_7d_large = calculate_growth_7d(data_large)
growth_7d_large.columns = pd.MultiIndex.from_product([['growth_7d_large'], growth_7d_large.columns])
data_large = pd.concat([data_large, growth_7d_large], axis=1)

## All data
growth_7d = calculate_growth_7d(data)
growth_7d.columns = pd.MultiIndex.from_product([['growth_7d'], growth_7d.columns])
data = pd.concat([data, growth_7d], axis=1)


# Calculate average daily growth for Largest and Large stocks
try:
    daily_growth_largest = data_largest['growth_7d_largest'].mean(axis=1)
    daily_growth_large = data_large['growth_7d_large'].mean(axis=1)

    # Calculate the number of days when Large outperforms Largest
    outperformance_days = (daily_growth_large > daily_growth_largest).sum()

    # Get the total number of trading days
    total_days = len(daily_growth_large)

    # Calculate the percentage of days Large outperforms Largest (rounded to nearest integer)
    percentage_outperformance = int((outperformance_days / total_days) * 100)
    print(f"\nLarge stocks outperformed Largest stocks on approximately {percentage_outperformance}% of days.")

except KeyError:
    print("Error: 'growth_7d' column not found.")

output

[*********************100%%**********************]  33 of 33 completed
[*********************100%%**********************]  33 of 33 completed
[*********************100%%**********************]  65 of 65 completed
Large stocks outperformed Largest stocks on approximately 46% of days.

Solution of Question 4

See the continuation of Module2_Colab_Working_with_the_data.ipynb (after [Code Snippet 9]).

Answer to Question 4: 2
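
Since the Colab itself isn't reproduced here, a minimal sketch of the strategy (the names 'Ticker', 'Date', 'cci', 'Adj Close' and df_2014_2023 are assumptions, not the Colab's exact schema):

import pandas as pd

def cci_friday_strategy(df, threshold=200, invest=1000, horizon=5):
    # df: long format, one row per (Ticker, Date); 'Date' is datetime,
    # 'cci' is the Commodity Channel Index, 'Adj Close' the adjusted close
    df = df.sort_values(['Ticker', 'Date']).copy()
    # growth over the next 5 trading days, computed per ticker
    df['growth_future_5d'] = df.groupby('Ticker')['Adj Close'].shift(-horizon) / df['Adj Close']
    # trade only on Fridays when CCI exceeds the "defensive" threshold
    trades = df[(df['Date'].dt.dayofweek == 4) & (df['cci'] > threshold)].dropna(subset=['growth_future_5d'])
    # gross profit per trade: invested amount times (growth - 1); no fees
    gross_profit = (invest * (trades['growth_future_5d'] - 1)).sum()
    return len(trades), gross_profit

# n_trades, profit = cci_friday_strategy(df_2014_2023)  # ~460 trades; profit / 1e3 -> $ thousands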


Solution of Question 5


Refining the Approach to IPO Investing

Since simply investing in all IPOs isn't a winning strategy, here's how we can refine the approach:

  1. Focus on Specific Verticals:

Data: Look for historical IPO data that includes the company's industry sector (vertical); financial websites or market research firms might offer this data.
Analysis: Analyze which industry sectors have a higher success rate (positive post-IPO returns for investors). This could involve calculating average returns by sector over a specific period.

  2. Smart Comparison with Existing Stocks:

Data: We'll need financial data for both IPO companies and their established competitors, including metrics like the Price-to-Earnings (P/E) ratio, revenue growth, and market capitalization. Public financial databases or financial news websites can be sources for this data.
Analysis: Compare the valuation (e.g., P/E ratio) of the IPO company to its established competitors. Look for companies with a strong track record but a lower valuation than their peers; this might indicate higher future growth potential for the IPO.

  3. Focus on Company Fundamentals (beyond size):

Data: Look for IPO prospectuses, financial statements, and news articles. These can provide insights into the company's:

  • Revenue and Profit Growth: Look for companies with consistent and sustainable growth in revenue and profits.
  • Market Opportunity: Analyze the total addressable market (TAM) for the company's product or service. A large and growing TAM indicates significant potential for future growth.
  • Competitive Advantage: Does the company have a strong moat, a competitive edge that protects it from rivals? This could be brand recognition, intellectual property, or a unique technology.
  • Management Team: Evaluate the experience and track record of the management team. A strong leadership team is crucial for navigating future challenges and growth.

Bypassing "Total Number of People":

While the total number of employees might be interesting, it's not a strong indicator of a successful IPO. Instead, focusing on the quality and experience of the team is more relevant.


Wednesday, April 17, 2024

Module 1 : Introduction and Data Sources

 

Stock Markets Analytics Zoomcamp

Reference : 

1. Youtube : 

2. Slides :

Module01_Colab_Introduction_and_Data_Sources.ipynb

# install main library YFinance
!pip install yfinance
# IMPORTS
import numpy as np
import pandas as pd
 
#Fin Data Sources
import yfinance as yf
import pandas_datareader as pdr
 
#Data viz
import plotly.graph_objs as go
import plotly.express as px
 
import time
from datetime import date
import pandas as pd
 
# Read dataset
df = pd.read_csv("GDPC1.csv")
 
# Assuming your dataframe is named df
# Convert DATE column to datetime if it's not already in datetime format
df['DATE'] = pd.to_datetime(df['DATE'])
 
# Shift GDPC1 column by 4 rows to get value from one year ago
df['GDPC1_one_year_ago'] = df['GDPC1'].shift(4)
 
# Calculate YoY growth rate
df['YoY_growth'] = df['GDPC1'] / df['GDPC1_one_year_ago'] - 1
 
# Filter dataframe to include only rows from 2023
df_2023 = df[df['DATE'].dt.year == 2023]
 
# Calculate average YoY growth rate for 2023
average_YoY_growth_2023 = df_2023['YoY_growth'].mean()
 
# Round to 1 decimal point
average_YoY_growth_2023_rounded = round(average_YoY_growth_2023 * 100, 1)
 
print("Average YoY growth rate in 2023: {}%".format(average_YoY_growth_2023_rounded))
Average YoY growth rate in 2023: 2.5%

1) Understanding Data-Driven Decisions: data pulls

end = date.today()
print(f'Year = {end.year}; month= {end.month}; day={end.day}')
 
start = date(year=end.year-70, month=end.month, day=end.day)
print(f'Period for indexes: {start} to {end} ')
Year = 2024; month= 4; day=17
Period for indexes: 1954-04-17 to 2024-04-17 
1.1) GDP
# Real Potential Gross Domestic Product (GDPPOT), Billions of Chained 2012 Dollars, QUARTERLY
gdppot = pdr.DataReader("GDPPOT", "fred", start=start)
 
gdppot['gdppot_us_yoy'] = gdppot.GDPPOT/gdppot.GDPPOT.shift(4)-1  # year-over-year: 4 quarters back
gdppot['gdppot_us_qoq'] = gdppot.GDPPOT/gdppot.GDPPOT.shift(1)-1  # quarter-over-quarter: 1 quarter back
gdppot.tail()
DATE        GDPPOT    gdppot_us_yoy  gdppot_us_qoq
2023-04-01  22246.23  0.021312       0.021312
2023-07-01  22369.71  0.021719       0.021719
2023-10-01  22495.19  0.022190       0.022190
2024-01-01  22618.15  0.022251       0.022251
2024-04-01  22739.00  0.022151       0.022151
# Visuals GDPPOT
 
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
 
fig, ax = plt.subplots(figsize=(20, 6))
plt.grid(True)
 
# Plotting area under US potential GDP curve
ax.fill_between(gdppot.index, gdppot.GDPPOT, color="red", alpha=0.3, label="US Potential GDP")
 
# Creating a secondary y-axis for GDP growth percentage
ax2 = ax.twinx()
ax2.yaxis.set_major_formatter(mtick.PercentFormatter(1.0))
ax2.plot(gdppot.gdppot_us_yoy, color="blue", marker="o", label="US Potential GDP Growth, % Y/Y")
 
# Setting labels and title
ax.set_xlabel("Date", fontsize=14)
ax.set_ylabel("US Potential GDP, $b", color="red", fontsize=14)
ax2.set_ylabel("US Potential GDP Growth, % Y/Y", color="blue", fontsize=14)
 
# Adding legend
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.legend(lines + lines2, labels + labels2, loc='upper left')
 
plt.show()
1.2) INFLATION – CPI CORE
# # "Core CPI index", MONTHLY
# The "Consumer Price Index for All Urban Consumers: All Items Less Food & Energy"
# is an aggregate of prices paid by urban consumers for a typical basket of goods, excluding food and energy.
# This measurement, known as "Core CPI," is widely used by economists because food and energy have very volatile prices.
cpilfesl = pdr.DataReader("CPILFESL", "fred", start=start)
 
cpilfesl['cpi_core_yoy'] = cpilfesl.CPILFESL/cpilfesl.CPILFESL.shift(12)-1
cpilfesl['cpi_core_mom'] = cpilfesl.CPILFESL/cpilfesl.CPILFESL.shift(1)-1
 
cpilfesl.tail(13)
fig, ax = plt.subplots(figsize=(20, 6))
plt.grid(True)
 
# Plotting area under CPI
ax.fill_between(cpilfesl.index, cpilfesl.CPILFESL, color="red", alpha=0.3, label="Core CPI index (monthly)")
 
# Creating a secondary y-axis for CPI growth percentage
ax2 = ax.twinx()
ax2.yaxis.set_major_formatter(mtick.PercentFormatter(1.0))
ax2.plot(cpilfesl.cpi_core_yoy, color="blue", marker="o", label="Core CPI index (monthly) Growth, % Y/Y")
 
# Setting labels and title
ax.set_xlabel("Date", fontsize=14)
ax.set_ylabel("Core CPI index (monthly)", color="red", fontsize=14)
ax2.set_ylabel("Core CPI index Growth, % Y/Y", color="blue", fontsize=14)
 
# Adding legend
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.legend(lines + lines2, labels + labels2, loc='upper left')
 
plt.show()
1.3 INTEREST RATES
fedfunds = pdr.DataReader("FEDFUNDS", "fred", start=start)
fedfunds.tail()
# Fed Funds
fig, ax = plt.subplots(figsize=(20, 6))
plt.grid(True)
 
# Plotting area under US potential GDP curve
# ax.fill_between(fedfunds.index, fedfunds.FEDFUNDS, color="red", alpha=0.3, label="Core CPI index (monthly)")
 
# # Creating a secondary y-axis for GDP growth percentage
# ax2 = ax.twinx()
ax.yaxis.set_major_formatter(mtick.PercentFormatter(1.0))
ax.plot(fedfunds.index, fedfunds.FEDFUNDS/100, marker="o", label="Fed Funds Rate")
 
# Setting labels and title
ax.set_xlabel("Date", fontsize=14)
ax.set_ylabel("Fed Funds Rate", color="blue", fontsize=14)
 
# Adding legend
lines, labels = ax.get_legend_handles_labels()
ax.legend(lines, labels, loc='upper left')
 
plt.show()
dgs1 = pdr.DataReader("DGS1", "fred", start=start)
dgs1.tail()

Other rates for US Treasury: https://fred.stlouisfed.org/categories/115

dgs5 = pdr.DataReader("DGS5", "fred", start=start)
dgs5.tail()
1.4 SNP500
# Other indexes: https://stooq.com/t/
 
# SPX= S&P500
spx_index = pdr.get_data_stooq('^SPX', start, end)
spx_index.head()
# 252 trading days a year
  # REVERSE ORDER OF THE DATA!!!
spx_index['spx_dod'] = (spx_index.Close/spx_index.Close.shift(-1)-1)
spx_index['spx_qoq'] = (spx_index.Close/spx_index.Close.shift(-63)-1)
spx_index['spx_yoy'] = (spx_index.Close/spx_index.Close.shift(-252)-1)
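
Since Stooq returns rows newest-first (hence the negative shifts above), an equivalent alternative (a sketch) is to sort the index ascending and use positive shifts:

spx_sorted = spx_index.sort_index()  # oldest-first
spx_sorted['spx_dod'] = spx_sorted.Close / spx_sorted.Close.shift(1) - 1
spx_sorted['spx_qoq'] = spx_sorted.Close / spx_sorted.Close.shift(63) - 1
spx_sorted['spx_yoy'] = spx_sorted.Close / spx_sorted.Close.shift(252) - 1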
spx_index.head()
spx_truncated = spx_index[spx_index.index>='1990-01-01']
# S&P500 abs. vs. relative growth
fig, ax = plt.subplots(figsize=(20, 6))
plt.grid(True)
 
# Plotting area under CPI
ax.fill_between(spx_truncated.index, spx_truncated.Close, color="red", alpha=0.3, label="S&P 500 Absolute Value (Close price)")
 
# Creating a secondary y-axis for CPI growth percentage
ax2 = ax.twinx()
ax2.yaxis.set_major_formatter(mtick.PercentFormatter(1.0))
ax2.plot(spx_truncated.spx_yoy,
         color="blue",
        #  marker=".",
         label="Year-over-Year Growth (%)")
 
# Setting labels and title
ax.set_xlabel("Date", fontsize=14)
ax.set_ylabel("S&P 500 Absolute Value (Close price", color="red", fontsize=14)
ax2.set_ylabel("Year-over-Year Growth (%)", color="blue", fontsize=14)
 
# Adding legend
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.legend(lines + lines2, labels + labels2, loc='upper left')
 
plt.show()

2) Data Sources for Stocks

2.1 OHLCV DATA DAILY – INDEXES
# INDEXES from Yahoo Finance
# DAX index (XETRA - XETRA Delayed Price. Currency in EUR)
dax_daily = yf.download(tickers = "^GDAXI",
                     period = "max",
                     interval = "1d")
dax_daily.tail()
# normally 252 trading days
dax_daily['adj_close_last_year'] = dax_daily['Adj Close'].shift(252)
dax_daily['yoy_growth'] = dax_daily['Adj Close'] / dax_daily['adj_close_last_year'] -1
dax_daily
dax_daily['Adj Close'].plot.line()
# S&P 500 INDEX : Chicago Options - Chicago Options Delayed Price. Currency in USD
snp500_daily = yf.download(tickers = "^SPX",
                     period = "max",
                     interval = "1d")
snp500_daily.tail()
# SNP - SNP Real Time Price. Currency in USD
snp500_daily_non_delayed = yf.download(tickers = "^GSPC",
                     period = "max",
                     interval = "1d")
snp500_daily_non_delayed.tail()
dji_daily = yf.download(tickers = "^DJI",
                     period = "max",
                     interval = "1d")
2.2 OHLCV DATA DAILY – ETFS
voo_etf = yf.download(tickers = "VOO",
                     period = "max",
                     interval = "1d")
voo_etf.tail()
# ETFs
# WisdomTree India Earnings Fund (EPI)
# NYSEArca - Nasdaq Real Time Price. Currency in USD
epi_etf_daily = yf.download(tickers = "EPI",
                     period = "max",
                     interval = "1d")
epi_etf_daily.head()
print(epi_etf_daily.shape)
(4063, 6)
# find dividends impact on Close vs. Adj.Close
epi_etf_daily[(epi_etf_daily.index >='2023-06-23') & (epi_etf_daily.index <='2023-06-28')]
# find dividends - diff for Close vs. Adj Close
# Open/Close for 06-25 diff is close to divs = 1.845 (~1.58 for Open and 1.3 for Close)
# HELP: https://help.yahoo.com/kb/SLN28256.html#:~:text=Adjusted%20close%20is%20the%20closing,Security%20Prices%20(CRSP)%20standards.
epi_etf_daily[(epi_etf_daily.index >='2022-06-23') & (epi_etf_daily.index <='2022-06-28')]
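
As a quick sketch of what the slice above shows: the gap between Close and Adj Close shrinks after each distribution, because Adj Close restates history for dividends (window dates chosen around the large 2022-06-24 payout):

# difference between Close and Adj Close around the ex-dividend date
window = epi_etf_daily.loc['2022-06-21':'2022-06-28', ['Close', 'Adj Close']].copy()
window['close_minus_adj'] = window['Close'] - window['Adj Close']
print(window)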
 
epi_etf_daily['Close'].plot(title="EPI's etf stock price")
# get actions, incl. dividends - as a dataFrame
epi = yf.Ticker('EPI')
epi.get_actions()
                           Dividends  Stock Splits  Capital Gains
Date
2008-12-22 00:00:00-05:00      0.091           0.0            0.0
2009-03-23 00:00:00-04:00      0.007           0.0            0.0
2009-06-22 00:00:00-04:00      0.002           0.0            0.0
2009-09-21 00:00:00-04:00      0.045           0.0            0.0
2009-12-21 00:00:00-05:00      0.006           0.0            0.0
2010-06-28 00:00:00-04:00      0.065           0.0            0.0
2010-09-20 00:00:00-04:00      0.065           0.0            0.0
2010-12-22 00:00:00-05:00      0.013           0.0            0.0
2011-06-22 00:00:00-04:00      0.062           0.0            0.0
2011-09-26 00:00:00-04:00      0.098           0.0            0.0
2012-03-26 00:00:00-04:00      0.010           0.0            0.0
2012-06-25 00:00:00-04:00      0.045           0.0            0.0
2012-09-24 00:00:00-04:00      0.083           0.0            0.0
2012-12-24 00:00:00-05:00      0.024           0.0            0.0
2013-06-24 00:00:00-04:00      0.068           0.0            0.0
2013-09-23 00:00:00-04:00      0.063           0.0            0.0
2014-03-24 00:00:00-04:00      0.099           0.0            0.0
2014-06-23 00:00:00-04:00      0.059           0.0            0.0
2014-09-22 00:00:00-04:00      0.068           0.0            0.0
2015-03-23 00:00:00-04:00      0.031           0.0            0.0
2015-06-22 00:00:00-04:00      0.067           0.0            0.0
2015-09-21 00:00:00-04:00      0.116           0.0            0.0
2015-12-21 00:00:00-05:00      0.024           0.0            0.0
2016-03-21 00:00:00-04:00      0.010           0.0            0.0
2016-06-20 00:00:00-04:00      0.125           0.0            0.0
2016-09-26 00:00:00-04:00      0.070           0.0            0.0
2016-12-23 00:00:00-05:00      0.006           0.0            0.0
2017-03-27 00:00:00-04:00      0.075           0.0            0.0
2017-06-26 00:00:00-04:00      0.055           0.0            0.0
2017-09-26 00:00:00-04:00      0.080           0.0            0.0
2017-12-26 00:00:00-05:00      0.026           0.0            0.0
2018-03-20 00:00:00-04:00      0.061           0.0            0.0
2018-06-25 00:00:00-04:00      0.085           0.0            0.0
2018-09-25 00:00:00-04:00      0.111           0.0            0.0
2018-12-24 00:00:00-05:00      0.008           0.0            0.0
2018-12-28 00:00:00-05:00      0.028           0.0            0.0
2019-03-26 00:00:00-04:00      0.120           0.0            0.0
2019-09-24 00:00:00-04:00      0.130           0.0            0.0
2019-12-23 00:00:00-05:00      0.041           0.0            0.0
2020-03-24 00:00:00-04:00      0.017           0.0            0.0
2020-06-23 00:00:00-04:00      0.094           0.0            0.0
2020-09-22 00:00:00-04:00      0.035           0.0            0.0
2020-12-21 00:00:00-05:00      0.081           0.0            0.0
2021-03-25 00:00:00-04:00      0.043           0.0            0.0
2021-06-24 00:00:00-04:00      0.085           0.0            0.0
2021-09-24 00:00:00-04:00      0.110           0.0            0.0
2021-12-27 00:00:00-05:00      0.192           0.0            0.0
2022-03-25 00:00:00-04:00      0.115           0.0            0.0
2022-06-24 00:00:00-04:00      1.845           0.0            0.0
2023-06-26 00:00:00-04:00      0.060           0.0            0.0
# get dividends as Series
epi.get_dividends()
Date
2008-12-22 00:00:00-05:00    0.091
2009-03-23 00:00:00-04:00    0.007
2009-06-22 00:00:00-04:00    0.002
2009-09-21 00:00:00-04:00    0.045
2009-12-21 00:00:00-05:00    0.006
2010-06-28 00:00:00-04:00    0.065
2010-09-20 00:00:00-04:00    0.065
2010-12-22 00:00:00-05:00    0.013
2011-06-22 00:00:00-04:00    0.062
2011-09-26 00:00:00-04:00    0.098
2012-03-26 00:00:00-04:00    0.010
2012-06-25 00:00:00-04:00    0.045
2012-09-24 00:00:00-04:00    0.083
2012-12-24 00:00:00-05:00    0.024
2013-06-24 00:00:00-04:00    0.068
2013-09-23 00:00:00-04:00    0.063
2014-03-24 00:00:00-04:00    0.099
2014-06-23 00:00:00-04:00    0.059
2014-09-22 00:00:00-04:00    0.068
2015-03-23 00:00:00-04:00    0.031
2015-06-22 00:00:00-04:00    0.067
2015-09-21 00:00:00-04:00    0.116
2015-12-21 00:00:00-05:00    0.024
2016-03-21 00:00:00-04:00    0.010
2016-06-20 00:00:00-04:00    0.125
2016-09-26 00:00:00-04:00    0.070
2016-12-23 00:00:00-05:00    0.006
2017-03-27 00:00:00-04:00    0.075
2017-06-26 00:00:00-04:00    0.055
2017-09-26 00:00:00-04:00    0.080
2017-12-26 00:00:00-05:00    0.026
2018-03-20 00:00:00-04:00    0.061
2018-06-25 00:00:00-04:00    0.085
2018-09-25 00:00:00-04:00    0.111
2018-12-24 00:00:00-05:00    0.008
2018-12-28 00:00:00-05:00    0.028
2019-03-26 00:00:00-04:00    0.120
2019-09-24 00:00:00-04:00    0.130
2019-12-23 00:00:00-05:00    0.041
2020-03-24 00:00:00-04:00    0.017
2020-06-23 00:00:00-04:00    0.094
2020-09-22 00:00:00-04:00    0.035
2020-12-21 00:00:00-05:00    0.081
2021-03-25 00:00:00-04:00    0.043
2021-06-24 00:00:00-04:00    0.085
2021-09-24 00:00:00-04:00    0.110
2021-12-27 00:00:00-05:00    0.192
2022-03-25 00:00:00-04:00    0.115
2022-06-24 00:00:00-04:00    1.845
2023-06-26 00:00:00-04:00    0.060
Name: Dividends, dtype: float64
2.3 PAID DATA – POLYGON.IO (NEWS ENDPOINT)
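The notes have no snippet for this section; a minimal sketch of calling Polygon.io's v2 reference/news endpoint (a paid data source with a free tier; the API key below is a placeholder you must replace with your own):

import requests

POLYGON_API_KEY = "YOUR_API_KEY"  # personal key (assumption: you have registered one)

# ticker news from the v2 reference/news endpoint
resp = requests.get(
    "https://api.polygon.io/v2/reference/news",
    params={"ticker": "NVDA", "limit": 10, "apiKey": POLYGON_API_KEY},
)
for item in resp.json().get("results", []):
    print(item["published_utc"], item["title"])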
2.4 MACROECONOMICS
  • add some indicator examples
# Gold reserves excl. gold for China
 
gold_reserves = pdr.DataReader("TRESEGCNM052N", "fred", start=start)
 
gold_reserves.TRESEGCNM052N.plot.line()
#  CBOE Gold ETF Volatility Index (GVZCLS)
gold_volatility = pdr.DataReader("GVZCLS", "fred", start=start)
 
gold_volatility.GVZCLS.plot.line()
#  Crude Oil Prices: West Texas Intermediate (WTI) - Cushing, Oklahoma (DCOILWTICO)
oil_wti = pdr.DataReader("DCOILWTICO", "fred", start=start)
 
# note: the negative WTI price in April 2020 is real (COVID-era demand collapse), not a data bug
oil_wti.DCOILWTICO.plot.line()
# Crude Oil Prices: Brent - Europe (DCOILBRENTEU)
oil_brent = pdr.DataReader("DCOILBRENTEU", "fred", start=start)
 
oil_brent.DCOILBRENTEU.plot.line()
# Web Scraping for Macro
# can't call directly via pd.read_html() as it returns 403 (forbidden) --> need to do a bit of work, but still no Selenium
import requests
from bs4 import BeautifulSoup
 
 
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"
}
 
# the target URL is not defined in the snippet; inferred from the indicators table
# below, so treat it as an assumption
url = "https://tradingeconomics.com/united-states/indicators"

response = requests.get(url, headers=headers)
# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Parse the HTML content of the webpage
    soup = BeautifulSoup(response.content, "html.parser")
 
    # You need to be able to find this table tag and read all behind it
    # Find the div with class "table-responsive"
    table_div = soup.find("div", class_="table-responsive")
 
    # Extract the table within the div
    table = table_div.find("table")
 
    # Use pandas to read the table into a DataFrame
    df = pd.read_html(str(table))[0]  # Assuming there's only one table, otherwise, loop through the list
 
    # Display the DataFrame
    print(df)
else:
    print("Failed to retrieve data from the webpage.")
          Unnamed: 0    Last  Previous  Highest    Lowest  \
0                   Currency   106.0     106.0   165.00     70.70   
1               Stock Market  5067.0    5051.0  5268.00      4.40   
2            GDP Growth Rate     3.4       4.9    34.80    -28.00   
3     GDP Annual Growth Rate     3.1       2.9    13.40     -7.50   
4          Unemployment Rate     3.8       3.9    14.90      2.50   
5          Non Farm Payrolls   303.0     270.0  4615.00 -20477.00   
6             Inflation Rate     3.5       3.2    23.70    -15.80   
7         Inflation Rate MoM     0.4       0.4     2.00     -1.80   
8              Interest Rate     5.5       5.5    20.00      0.25   
9           Balance of Trade   -68.9     -67.6     1.95   -103.00   
10           Current Account  -195.0    -196.0     9.96   -284.00   
11    Current Account to GDP    -3.0      -3.8     0.20     -6.00   
12    Government Debt to GDP   129.0     127.0   129.00     31.80   
13         Government Budget    -5.8     -12.4     4.50    -15.00   
14       Business Confidence    50.3      47.8    77.50     29.40   
15         Manufacturing PMI    51.9      52.2    63.40     36.10   
16     Non Manufacturing PMI    51.4      52.6    67.60     37.80   
17              Services PMI    51.7      52.3    70.40     26.70   
18       Consumer Confidence    77.9      79.4   111.00     50.00   
19          Retail Sales MoM     0.7       0.9    19.00    -14.60   
20          Building Permits  1458.0    1523.0  2419.00    513.00   
21        Corporate Tax Rate    21.0      21.0    52.80      1.00   
22  Personal Income Tax Rate    37.0      37.0    39.60     35.00   

        Unnamed: 5 Unnamed: 6  
0              NaN     Apr/24  
1           points     Apr/24  
2          percent     Dec/23  
3          percent     Dec/23  
4          percent     Mar/24  
5         Thousand     Mar/24  
6          percent     Mar/24  
7          percent     Mar/24  
8          percent     Mar/24  
9      USD Billion     Feb/24  
10     USD Billion     Dec/23  
11  percent of GDP     Dec/23  
12  percent of GDP     Dec/22  
13  percent of GDP     Dec/22  
14          points     Mar/24  
15          points     Mar/24  
16          points     Mar/24  
17          points     Mar/24  
18          points     Apr/24  
19         percent     Mar/24  
20        Thousand     Mar/24  
21         percent     Dec/24  
22         percent     Dec/23  
2.5) FINANCIAL REPORTING – EDGAR (IN YAHOO)
# let's check for NVDA
nvda =  yf.Ticker('NVDA')
# yearly financials for the last 4 years
nvda.financials
# balance sheet
nvda.balance_sheet
# Basic info:
nvda.basic_info
lazy-loading dict with keys = ['currency', 'dayHigh', 'dayLow', 'exchange', 'fiftyDayAverage', 'lastPrice', 'lastVolume', 'marketCap', 'open', 'previousClose', 'quoteType', 'regularMarketPreviousClose', 'shares', 'tenDayAverageVolume', 'threeMonthAverageVolume', 'timezone', 'twoHundredDayAverage', 'yearChange', 'yearHigh', 'yearLow']
# marketCap is quite useful, but don't know when it was updated? Daily?
nvda.basic_info['marketCap']/1e9
2185.3750610351562
# read this article for full info: https://zoo.cs.yale.edu/classes/cs458/lectures/yfinance.html
2.6 WEB SCRAPING – COMPANY INFO FOR CLUSTERING
# ask chatGPT: emulate clicking the link and downloading the content
import requests
from bs4 import BeautifulSoup
 
# URL of the webpage (not defined in the snippet; inferred from the download-link
# construction below, so treat it as an assumption)
url = "https://companiesmarketcap.com/"
# Define headers with a user-agent to mimic a web browser
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"
}
 
# Send a GET request to the URL with headers
response = requests.get(url, headers=headers)
 
# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Parse the HTML content of the webpage
    soup = BeautifulSoup(response.content, "html.parser")
 
    # Find the download link within the webpage
    download_link = soup.find("a", {"rel": "nofollow", "href": "?download=csv"})
 
    # If the download link is found
    if download_link:
        # Extract the href attribute which contains the actual download link
        download_url = 'https://companiesmarketcap.com/'+download_link["href"]
 
        # Download the CSV file using the obtained download URL
        download_response = requests.get(download_url, headers=headers)
 
        # Check if the download request was successful
        if download_response.status_code == 200:
            # Save the content of the response to a local file
            with open("global_stocks.csv", "wb") as f:
                f.write(download_response.content)
            print("CSV file downloaded successfully.")
        else:
            print("Failed to download the CSV file.")
    else:
        print("Download link not found on the webpage.")
else:
    print("Failed to retrieve data from the webpage.")
global_stocks = pd.read_csv("global_stocks.csv")
 
global_stocks.head()
global_stocks.info()
