### Project 2 resubmission

### 1 DEC 2023

### 29 NOV 2023

Vector Autoregression (VAR) is a sophisticated statistical method used for modeling the dynamic interactions between multiple time series variables. Unlike traditional univariate time series models like ARIMA, VAR extends the analysis to encompass several related variables simultaneously. It is a valuable tool in various fields, including economics, finance, epidemiology, and climate science, where understanding how multiple variables evolve over time is critical.

In VAR modeling, a set of time series variables is treated as a vector, with each variable representing a dimension. These variables are assumed to depend on their past values and the past values of all other variables in the system. The VAR model is typically expressed as a system of linear equations, where each equation corresponds to one variable, and the model captures how each variable is influenced by its own lagged values and the lagged values of all other variables in the system.

The fundamental components of a VAR model include the order (p), which specifies the number of lagged observations included in the model, and the coefficient matrices that determine the relationships between variables and their past values. These coefficients are estimated from the data. Additionally, VAR models have residuals (errors), which represent the differences between observed and predicted values. These residuals should exhibit white noise properties, indicating that the model adequately captures the data’s underlying patterns.

VAR models serve several essential purposes:

- **Forecasting**: VAR models can generate short-term and long-term forecasts for each variable within the system, offering a flexible framework for capturing interactions among variables.
- **Causality Analysis**: They assist in identifying causal relationships among variables, though causal interpretations should be made with caution, especially in observational data.
- **Impulse Response Analysis**: VAR models allow for the examination of how shocks or innovations to one variable affect all variables in the system over time, providing insights into dynamic interdependencies.
- **Policy Analysis**: In economics, VAR models are employed to assess the impact of economic policies on variables such as GDP, inflation, and interest rates.

In summary, Vector Autoregression is a powerful and versatile multivariate time series modeling technique that enables researchers and analysts to explore complex interactions among multiple variables over time. It is a valuable tool for understanding and predicting the behavior of dynamic systems in diverse fields, making it an essential asset for decision-making, policy analysis, and scientific inquiry.

### 27 NOV 2023

Regression modeling is a fundamental statistical technique used for understanding and quantifying the relationship between a dependent variable (often referred to as the response or outcome variable) and one or more independent variables (predictors or explanatory variables). The primary objective of regression analysis is to establish a mathematical equation or model that can predict the value of the dependent variable based on the values of the independent variables. This modeling approach is widely applied across various disciplines, including economics, finance, social sciences, and natural sciences.

The most common type of regression analysis is linear regression, where the relationship between variables is assumed to be linear. In simple linear regression, there is a single predictor variable, while multiple linear regression involves two or more predictors. The model’s equation takes the form of Y = β0 + β1X1 + β2X2 + … + βnXn, where Y is the dependent variable, β0 is the intercept, β1, β2, …, βn are the coefficients, and X1, X2, …, Xn are the independent variables. The coefficients represent the strength and direction of the relationships.
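The equation above can be connected to code with a small sketch: the data below are generated exactly from an assumed toy relationship Y = 1 + 2·X1 + 3·X2, so the fitted intercept and coefficients should recover those values:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Noise-free toy data from Y = 1 + 2*X1 + 3*X2
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))
y = 1 + 2 * X[:, 0] + 3 * X[:, 1]

model = LinearRegression().fit(X, y)
intercept = model.intercept_  # beta0, should be ~1
coeffs = model.coef_          # beta1 and beta2, should be ~[2, 3]
```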

Regression modeling provides several benefits, including:

- **Prediction**: It allows for accurate predictions of the dependent variable’s values based on observed values of the independent variables, aiding in decision-making and planning.
- **Inference**: Regression analysis provides valuable insights into the relationships between variables, helping researchers draw meaningful conclusions and form hypotheses about causality.
- **Control**: It enables the control and manipulation of independent variables to observe their impact on the dependent variable, often used in experimental design.
- **Model Evaluation**: Various statistical measures, like R-squared, p-values, and residual analysis, help assess the goodness of fit and reliability of the model.
- **Variable Selection**: Regression modeling can assist in identifying which independent variables have the most significant influence on the dependent variable and which can be omitted.

Regression analysis extends beyond linear regression, with other variants like logistic regression for binary outcomes, polynomial regression for curved relationships, and time series regression for temporal data. Advanced techniques such as ridge regression, Lasso regression, and regression trees offer solutions for complex data scenarios. In summary, regression modeling is a versatile and indispensable tool in statistics, enabling researchers and analysts to uncover relationships, make predictions, and gain a deeper understanding of the dynamics within datasets across various fields of study.

### 20 NOV 2023

SARIMA, which stands for Seasonal AutoRegressive Integrated Moving Average, is an extension of the traditional ARIMA (AutoRegressive Integrated Moving Average) model in statistics. SARIMA is a powerful tool for modeling and forecasting time series data that exhibit both non-seasonal and seasonal patterns. It’s particularly useful when dealing with data that displays regular fluctuations or seasonality at fixed time intervals, such as monthly sales data, quarterly economic indicators, or daily temperature readings.

SARIMA incorporates the following components:

- **AutoRegressive (AR) Component**: Like in ARIMA, the AR component models the relationship between the current observation and its previous values. It captures the serial correlation in the data and is denoted by the order ‘p’.
- **Integrated (I) Component**: The integrated component deals with differencing the data to achieve stationarity, similar to ARIMA. The order of differencing is ‘d’ and is determined empirically.
- **Moving Average (MA) Component**: The MA component accounts for the correlation between the current observation and past error terms, reducing the effect of random shocks on the data. It’s represented by the order ‘q’.
- **Seasonal AutoRegressive (SAR) Component**: SARIMA introduces the seasonal AR component, denoted by ‘P’, which models the relationship between the current observation and its previous seasonal observations at a fixed lag.
- **Seasonal Integrated (SI) Component**: The seasonal integrated component, denoted by ‘D’, captures the differencing needed to remove seasonality at the seasonal lag. This order is determined empirically, like ‘d’.
- **Seasonal Moving Average (SMA) Component**: Finally, the seasonal MA component, represented by ‘Q’, accounts for the correlation between the current observation and past seasonal errors at the seasonal lag.

Selecting the appropriate orders for these components (p, d, q, P, D, Q) is a critical step in SARIMA modeling, usually guided by data analysis, visualization, and statistical criteria. Once determined, you can estimate the model parameters using methods like maximum likelihood estimation.

SARIMA models are versatile and capable of capturing complex seasonal patterns in time series data. They provide a robust framework for forecasting future values while considering both short-term and long-term dependencies. Software packages like Python’s statsmodels and R’s forecast library offer tools for implementing SARIMA models, making them accessible and widely used in various fields, including economics, finance, and climate science, among others, where seasonal patterns play a significant role in data analysis and prediction.

### Project 1 Report resubmission

https://colab.research.google.com/drive/1Fn1ZMAqXBDbQ1OoJLhMQDhKS_yTLWBnh?usp=sharing

PROJECT-1 REPORT

### 17 NOV 2023

### 15 Nov 2023

ARIMA, which stands for Auto Regressive Integrated Moving Average, is a powerful and widely used time series forecasting method in statistics and econometrics. It is designed to model and predict time series data by capturing the underlying patterns, trends, and dependencies within the data. ARIMA models are particularly useful for analyzing data that exhibit temporal patterns, such as stock prices, economic indicators, and weather measurements.

ARIMA consists of three key components, each denoted by a parameter:

- Auto Regressive (AR) component: This part of the model represents the relationship between the current observation and its previous values. It accounts for the data’s serial correlation, where each data point is influenced by its recent history. The order of the AR component (denoted as ‘p’) specifies how many past observations are considered in the model.
- Integrated (I) component: The “integrated” component reflects the number of differencing operations needed to make the time series stationary. Stationarity is a critical assumption in time series analysis, as it ensures that statistical properties of the data remain constant over time. The order of differencing (denoted as ‘d’) is determined by the number of times differencing is required to achieve stationarity.
- Moving Average (MA) component: The MA component accounts for the correlation between the current observation and past error terms or “shocks” in the data. Similar to the AR component, the order of the MA component (denoted as ‘q’) specifies how many lagged error terms are considered.

The combination of these three components, denoted as ARIMA(p, d, q), forms the ARIMA model. Selecting appropriate values for p, d, and q is a crucial step in ARIMA modeling and is typically determined through data analysis, visual inspection, and statistical tests.

ARIMA models have been successful in various applications, including financial forecasting, demand forecasting, and time series analysis in economics. They provide a flexible and robust framework for capturing and predicting complex temporal patterns, making them an essential tool for analysts and researchers dealing with time series data. Moreover, ARIMA models have served as a foundation for more advanced time series forecasting methods, making them an important building block in the field of time series analysis.

### 13 NOV 2023

Time series analysis is a statistical approach that focuses on examining and interpreting data points collected over successive time intervals. This technique is crucial in various fields, including finance, economics, climate science, and epidemiology, among others. Time series data differ from other data types because they exhibit a temporal order, with observations recorded at regular intervals, such as daily, monthly, or yearly. The primary objective of time series analysis is to uncover underlying patterns, trends, and relationships within the data to facilitate informed decision-making and forecasting.

Time series analysis involves several key components, such as trend analysis to identify long-term directional movements, the detection of seasonality, which represents recurring patterns over specific time periods, and the separation of random noise from meaningful patterns. Autocorrelation analysis is also critical in understanding how current data points depend on previous ones. By employing various techniques like moving averages, exponential smoothing, autoregressive integrated moving average (ARIMA) models, and advanced machine learning algorithms, analysts can extract valuable insights from time series data.

Ultimately, time series analysis empowers researchers, businesses, and policymakers to comprehend past behaviors, make predictions about future trends, and formulate effective strategies based on historical data. This statistical tool plays a pivotal role in unraveling the complexities of time-dependent data, facilitating better decision-making and planning across a wide range of domains.

### Project 2 Report

### 10th November 2023

Here I have used both fatal police shooting locations and police station locations from two different datasets. Plotting these values on a map is very helpful for understanding the distance between a shooting spot and the nearest police station. Police stations are indicated in red and shooting spots in blue. Using the zoom option we can also observe the map more closely. Clicking on any red point shows the police station count number, and clicking on a blue point shows the county or area name in a tooltip. From this visualization it is clear that most of the police shootings took place near police stations.

### 8th November

A decision tree in statistical analysis is a graphical representation of a decision-making process. It is a tree-like model where each internal node represents a decision or test on an attribute, each branch represents an outcome of that test, and each leaf node represents a class label or a decision. Decision trees are used for both classification and regression analysis.

In a decision tree for classification, the goal is to classify an instance into one of several predefined classes. The tree is built by recursively splitting the data based on the most significant attribute at each node, creating decision rules that lead to the final classification.

In regression decision trees, the goal is to predict a continuous variable instead of a class label. The process is similar, with each node representing a decision based on an attribute, and the leaves containing the predicted values.

Decision trees are popular in machine learning and statistical analysis because they are easy to understand and interpret. They provide a visual representation of decision-making processes and can be a powerful tool for both analysis and prediction.
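A short sketch of a classification tree using scikit-learn’s built-in iris dataset; the dataset and the depth limit are stand-ins for illustration, not the project’s data:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Each internal node tests one feature; leaves hold class labels
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
accuracy = clf.score(X, y)
```

`export_text(clf)` from `sklearn.tree` prints the learned decision rules, which is one way to inspect the tree’s interpretable structure.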

### 6th November

A t-test is a statistical hypothesis test used to determine if there’s a significant difference between the means of two groups or datasets. In the context of our dataset on police shootings, we can apply a t-test to compare specific characteristics of two groups and assess whether the observed differences are statistically significant.

For example, suppose we want to investigate whether there’s a significant difference in the ages of individuals involved in police shootings based on gender (male and female). To do this, we must first separate our dataset into these two groups. Then, we can apply a t-test to evaluate whether the age means of these groups differ significantly. The null hypothesis in this case would be that there’s no significant age difference between the two gender groups, and the alternative hypothesis would suggest that there is a significant difference.

By performing a t-test, we can quantify the extent of the difference and determine whether it’s likely due to random chance or if it represents a meaningful distinction between the two groups. This allows us to draw statistical conclusions about the impact of gender on the ages of individuals involved in police shootings, providing valuable insights based on the characteristics of our dataset.
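A sketch of this comparison with SciPy, using randomly generated age samples as stand-ins for the two gender groups (the numbers are invented, not drawn from the actual shootings dataset):

```python
import numpy as np
from scipy import stats

# Hypothetical age samples for the two groups (illustrative values only)
rng = np.random.default_rng(4)
ages_male = rng.normal(36, 12, 300)
ages_female = rng.normal(34, 12, 80)

# Welch's t-test; reject the null hypothesis of equal means if p_value < 0.05
t_stat, p_value = stats.ttest_ind(ages_male, ages_female, equal_var=False)
```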

### 3rd November

ANOVA, or Analysis of Variance, is a statistical test used to compare the means of three or more groups or datasets to determine if there are statistically significant differences between them. It’s a powerful tool to assess whether variations between groups are due to actual differences or if they could have occurred by chance. In the context of our dataset of police shooting records, here’s how you can use ANOVA:

Suppose we want to analyze whether there are statistically significant differences in the ages of individuals involved in police shootings across different threat levels (e.g., “attack,” “other,” etc.). To do this, we must first divide our dataset into groups based on threat levels. Then, we could apply ANOVA to assess whether there are significant variations in the ages of individuals across these groups.

In this way, ANOVA helps you identify if there are meaningful distinctions in the ages of individuals involved in police shootings, which could provide valuable insights into whether certain threat levels are associated with specific age groups.
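A one-way ANOVA sketch with SciPy; the three age samples are randomly generated placeholders for the threat-level groups, not values from the dataset:

```python
import numpy as np
from scipy import stats

# Hypothetical age samples grouped by threat level (illustrative only)
rng = np.random.default_rng(5)
attack = rng.normal(35, 10, 100)
other = rng.normal(37, 10, 100)
undetermined = rng.normal(36, 10, 50)

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(attack, other, undetermined)
```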

### 1st November

K-Medoids is a clustering algorithm used in data analysis and machine learning. It’s a variation of the more common K-Means clustering method. K-Medoids, also known as Partitioning Around Medoids (PAM), is used to group similar data points into clusters, with a focus on robustness and the ability to handle outliers. Here’s a simple explanation:

Start with a set of data points, such as our dataset of police shooting records. In K-Medoids, we select k initial data points as “medoids” or cluster centers. These are not necessarily the means of the data points, as in K-Means, but actual data points from our dataset. Each data point is assigned to the nearest medoid, creating clusters based on proximity. The medoids are then updated by selecting the data point that minimizes the total dissimilarity (distance) to other data points within the same cluster. Repeat iteratively until the clusters no longer change significantly.

K-Medoids is especially useful when dealing with data that may have outliers or when you want to identify the most central or representative data points within clusters. It’s a bit more robust than K-Means because it doesn’t rely on the mean, which can be sensitive to extreme values. Instead, it uses actual data points as medoids, making it better suited for certain types of datasets. In our project, we might use K-Medoids to cluster police shooting records, potentially identifying the most representative cases within each cluster.
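Since scikit-learn has no built-in K-Medoids, here is a minimal PAM-style sketch in NumPy; the data are two synthetic blobs, and the update rule is simplified relative to full PAM:

```python
import numpy as np

def k_medoids(points, k, n_iter=20, seed=0):
    """Minimal PAM-style sketch: cluster centers are actual data points."""
    rng = np.random.default_rng(seed)
    # Pairwise distance matrix between all points
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    medoids = rng.choice(len(points), k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(d[:, medoids], axis=1)  # nearest-medoid assignment
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            # New medoid = member minimizing total distance within the cluster
            new_medoids[j] = members[np.argmin(d[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break  # assignments stable, stop iterating
        medoids = new_medoids
    return medoids, labels

# Two well-separated synthetic blobs (illustrative stand-in data)
rng = np.random.default_rng(9)
pts = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
medoids, labels = k_medoids(pts, k=2)
```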

### 30th October

Hierarchical clustering is a method in data analysis and statistics used to group similar data points or objects into clusters or groups. It’s called “hierarchical” because it organizes data in a tree-like structure, where clusters can contain sub-clusters.

Here’s how it works:

- **Data Points**: We start with our individual data points or objects. In our project, these could be individual records of police shootings.
- **Pairwise Distances**: Here we calculate the distances between all data points. In our case, it might be the geographical distances between different locations of police shootings.
- **Agglomeration**: Initially, each data point is considered as its own cluster. Then, the two closest clusters or data points are merged into a new cluster. This process continues iteratively, gradually forming a hierarchy of clusters.
- **Dendrogram**: As we merge clusters, we create a visual representation called a dendrogram. A dendrogram is like a tree diagram that shows how clusters are formed and nested within each other.

Hierarchical clustering helps us group data in a way that shows relationships and hierarchies. It’s like organizing objects based on their similarities, where smaller groups are nested within larger ones. This technique can be useful for exploring patterns and structures within our dataset, such as identifying different categories or groupings of police shootings.
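The agglomeration-then-dendrogram process can be sketched with SciPy; the 2-D points below are synthetic stand-ins for shooting locations:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Illustrative 2-D points standing in for shooting locations
rng = np.random.default_rng(6)
pts = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(3, 0.3, (10, 2))])

Z = linkage(pts, method="ward")                  # agglomerative merge history
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters
# scipy.cluster.hierarchy.dendrogram(Z) would draw the dendrogram itself
```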

### 27th October

The Monte Carlo approach is a method used to solve complex problems through random sampling. In simple words, it’s like making guesses repeatedly to find answers. We can use this approach with our dataset to simulate and analyze different scenarios. For example, if we’re studying the impact of a policy change on our dataset, we can create multiple random variations of our data, apply the policy change to each variation, and see how it affects the outcomes. By running many simulations, we can get a sense of the range of possible results and their probabilities, helping us to make informed decisions or predictions based on our dataset. It’s like rolling a die many times to understand the likelihood of different outcomes without having to physically roll it every time.
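The dice analogy can be made literal in a few lines: instead of enumerating the 36 outcomes of two dice, we estimate the probability of rolling a sum of 7 (exactly 6/36 ≈ 0.167) by repeated random sampling:

```python
import random

random.seed(0)
n = 100_000
# Count how often two simulated dice sum to 7
hits = sum(1 for _ in range(n)
           if random.randint(1, 6) + random.randint(1, 6) == 7)
estimate = hits / n  # should be close to 1/6
```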

### 23rd October

There are different methods of clustering: k-means, k-medoids, and DBSCAN.

**k-means** is a centroid-based clustering algorithm. It works by first randomly selecting k centroids, which are representative points of the data set. Then, each data point is assigned to the closest centroid. The centroids are then updated to be the average of the data points assigned to them. This process is repeated until the centroids no longer change.

**k-medoids** is also a centroid-based clustering algorithm, but instead of using the average of the data points assigned to a centroid, it uses the medoid, which is the most centrally located data point in the cluster. This makes k-medoids more robust to outliers than k-means.

**DBSCAN** is a density-based clustering algorithm. It works by identifying regions of high density in the data set. A data point is considered to be part of a cluster if it is within a certain distance (epsilon) of at least a minimum number of other data points (minPts). DBSCAN is able to identify clusters of arbitrary shape and size, and it is also robust to outliers.
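Two of the three algorithms are available directly in scikit-learn (k-medoids needs an extra package, so it is omitted here). A sketch on two synthetic blobs, with `eps` and `min_samples` chosen by eye for this toy data:

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

# Two well-separated synthetic blobs (illustrative data)
rng = np.random.default_rng(7)
pts = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(4, 0.3, (30, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pts)
db = DBSCAN(eps=1.5, min_samples=5).fit(pts)  # epsilon and minPts as described

n_kmeans_clusters = len(set(km.labels_))
n_dbscan_clusters = len(set(db.labels_) - {-1})  # -1 marks noise points
```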

### 20th October

Cohen’s d is a statistic used in statistical analysis to measure the difference between two groups. It helps us understand if the difference between these groups is significant or just due to random chance.

To calculate Cohen’s d, you need the means and standard deviations of two groups. The formula involves subtracting the mean of one group from the mean of the other and then dividing by the pooled standard deviation. The result gives a measure of how far apart the means of the two groups are in terms of standard deviations. A larger Cohen’s d indicates a more substantial difference between the groups, while a smaller value suggests a smaller difference.

For example, if the Cohen’s d is 0.5, it indicates a moderate difference between the groups. If it’s 1.0 or higher, that’s considered a large difference. On the other hand, if it’s close to 0, it suggests that there’s not much of a difference between the groups, and any variations are likely due to random factors.
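The formula translates directly into code; the two samples below are made-up numbers chosen so the arithmetic is easy to check by hand:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Two small illustrative samples: means 7 and 5, pooled sd = sqrt(2.5)
a = [5, 6, 7, 8, 9]
b = [3, 4, 5, 6, 7]
d = cohens_d(a, b)  # 2 / sqrt(2.5) ≈ 1.26, a large difference by the rule of thumb
```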

### 18th October

In this dataset, there are a total of 7,085 locations where police shootings occurred. These locations are represented by latitude and longitude coordinates. To work with these coordinates, we are using Mathematica, a mathematical software tool. We then convert the latitude and longitude pairs into geographic position objects using the GeoPosition function, which allows us to work with the data in a geographic context. We can then create a geographic plot of all these locations, which helps us visualize where these incidents took place on a map. Additionally, we can use the GeoHistogram function to see the distribution of these locations. This information is valuable for understanding the geographic patterns of police shootings in the continental United States.

### 16th October

Clustering is a technique in data analysis and machine learning used to group similar data points or objects together based on certain features or characteristics. The goal of clustering is to identify patterns and structures within the data, such that data points within the same cluster are more similar to each other than to those in other clusters.

The process of clustering police shooting locations in the continental US involves using geodesic distance, which calculates distances between latitude and longitude coordinates. In our case, Mathematica provides a convenient automatic clustering procedure called FindClusters. What sets it apart is that it doesn’t require us to predefine the number of clusters.

To calculate the geodesic distance, which represents the distance along the Earth’s surface, between two locations using latitude and longitude data, we can use the GeoDistance function. This function takes the coordinates of two points and calculates the distance between them, taking into account the curvature of the Earth’s surface.
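GeoDistance is a Mathematica function, but the underlying idea can be sketched in Python with the haversine formula, which approximates the Earth as a sphere (the two cities below are arbitrary test points, not locations from the dataset):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (spherical-Earth approximation)."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Boston to New York, roughly 300 km along the Earth's surface
dist = haversine_km(42.36, -71.06, 40.71, -74.01)
```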

### 13th October

The project involves analyzing a dataset of fatal police shootings in the United States to gain insights and information about these incidents. The dataset contains information about individuals who were fatally shot by the police, including details such as their names, dates of the incidents, manner of death, whether they were armed, their age, gender, race, city, state, and more.

The dataset also documents the manner of death, specifying whether the individual was shot, shot and Tasered, or otherwise. It further classifies whether the individual was armed and with what, covering a range from guns and knives to being unarmed. The dataset provides insight into the presence of mental illness signs among the individuals and the perceived threat level they posed. It also notes whether the individuals were fleeing the scene and if body cameras were used during these encounters. Additionally, geographical coordinates (longitude and latitude) offer precise incident locations.

### 11th October

For the second project we are using the k-NN method. K-nearest neighbors (k-NN) is a non-parametric supervised machine learning algorithm used for classification and regression tasks. In classification, k-NN predicts the class of a new data point based on the classes of its k nearest neighbors. In regression, it predicts the value of a new data point based on the values of its k nearest neighbors.

Here’s how it works:

- Imagine you have a bunch of data points in space, and each point represents something with certain characteristics.
- When you want to know something about a new data point, k-NN looks at the “k” nearest data points to that new point.
- It then checks what those nearest data points are like (their characteristics) and, based on the majority opinion of those neighbors, predicts or classifies the new data point.
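The steps above can be sketched with scikit-learn; the iris dataset and k = 5 are illustrative stand-ins for the project’s data and parameter choice:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)  # k = 5 nearest neighbors
knn.fit(X_tr, y_tr)
accuracy = knn.score(X_te, y_te)  # majority vote among the 5 neighbors
```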

### Project 1 Report

### 4th October 2023

In this project, we began by preprocessing and transforming the data obtained from the Centers for Disease Control and Prevention (CDC) on U.S. county rates of diabetes, obesity, and inactivity (lack of physical activity). Following this, we merged the datasets based on FIPS code and year, creating a consolidated dataset for comprehensive analysis. My primary focus was on exploring the relationship between the percentage of diabetic individuals and the percentages of obesity and physical inactivity. Employing linear regression, we delineated a model to predict the percentage of diabetic individuals based on the percentages of obesity and inactivity. The training of the model involved splitting the data into training and testing sets, with subsequent predictions made and evaluated on the test set. Visualizing the correlation between these variables was a pivotal step, and we used a three-dimensional scatter plot to provide a comprehensive overview of their relationships.

### 2nd October 2023- Mon class

Regularization is a technique that prevents overfitting in statistical analysis. Overfitting occurs when a statistical model learns the training data too well and is unable to generalize to new data. Regularization works by penalizing the model for being too complex. This forces the model to learn only the most important features of the data, which makes it more likely to generalize well to new data. There are many different regularization techniques; some common ones are:

- **L1 regularization**: L1 regularization penalizes the model for the sum of the absolute values of its coefficients. This forces the model to learn only the most important features of the data, because it will be penalized for including too many features.
- **L2 regularization**: L2 regularization penalizes the model for the sum of the squared values of its coefficients. This forces the model to learn smaller coefficients, which makes it less likely to overfit the data.
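A small sketch contrasting the two penalties, on synthetic data where only the first two of ten features actually matter (the alpha values are arbitrary choices for this toy example):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# y depends only on the first two features; the other eight are noise
rng = np.random.default_rng(8)
X = rng.normal(size=(200, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, 200)

ridge = Ridge(alpha=1.0).fit(X, y)  # L2: shrinks all coefficients a little
lasso = Lasso(alpha=0.1).fit(X, y)  # L1: drives irrelevant ones to exactly zero
n_zeroed = int(np.sum(lasso.coef_ == 0))
```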

### 29th September 2023- Friday class

An example of cross-validation: imagine learning to play a game, and to figure out how good you’re becoming, you decide to do a “5-round practice test.” In each round, you play the game a bit differently, learning from your mistakes and adjusting your strategy. After the 5 rounds, you look at your scores in each round and calculate the average score. This average score gives you a sense of how well you’re doing overall. In machine learning, we do something similar with cross-validation. Instead of training a model just once, we train it several times on different parts of our data, checking how well it does on average. It’s like making sure the model doesn’t just remember the training data but can also handle new situations effectively.

An example of the bootstrap method: suppose you want to know the average height of students in a school, but measuring everyone is too time-consuming. Here’s where the bootstrap idea comes in. Instead of measuring every student, we randomly pick some students, record their heights, and put them back. Repeat this process several times, noting heights each time. By looking at all these recorded heights, we can make a good guess about the average height of all students. Bootstrap is like taking sneak peeks at smaller groups, putting them back into the “height pool,” and using those glimpses to figure out the average height without measuring every single student. It’s a clever way to estimate things without doing the whole task, making it much more practical, especially with large groups of data.
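The height example can be sketched in a few lines of standard-library Python; the heights are randomly generated stand-ins, not real measurements:

```python
import random
import statistics

# Hypothetical student heights in cm (illustrative values)
random.seed(0)
heights = [random.gauss(170, 8) for _ in range(500)]

# Bootstrap: resample with replacement, record each resample's mean
boot_means = [statistics.mean(random.choices(heights, k=len(heights)))
              for _ in range(1000)]
estimate = statistics.mean(boot_means)  # close to the full-sample mean
```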

### 27th September 2023- Wed class

In today’s class we learnt how we use tools like Least Squares Error (LSE) and Mean Squared Error (MSE) to measure how close our predictions are to reality. LSE helps find the best-fit line, and MSE gives us an average of how much our predictions differ from the actual values. Training Error tells us how well our model learned from the data it was trained on, while Test Error checks if our model can handle new, unseen data. It’s like checking if we studied well for a test (Training Error) and if we can apply that knowledge to questions we’ve never seen before (Test Error). Keeping an eye on these measures helps us make our models better over time.

Imagine you’re teaching a computer to make predictions, like guessing a person’s height based on some information. Least Squares Error and Mean Squared Error are like tools that help the computer learn the best way to make these guesses. Training Error is like checking how well the computer memorized the examples it learned from, while Test Error is like testing if it can correctly guess the height of someone it’s never seen before. It’s a bit like making sure the computer not only learns from its training data but can also adapt to new challenges.
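The MSE idea from these notes can be computed directly; the height values here are made up purely for illustration:

```python
# MSE: average squared gap between predictions and actual values
actual = [160, 172, 168, 181]
predicted = [158, 175, 165, 180]
mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
# (2^2 + 3^2 + 3^2 + 1^2) / 4 = 23 / 4 = 5.75
```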

### 25th September 2023 – Monday class

Cross-validation and bootstrap are two resampling methods used to evaluate statistical models and estimate generalization error and parameter variance, respectively. Cross-validation splits the data into folds and trains the model on each fold, using the remaining folds for validation. Bootstrap creates subsamples of the data with replacement and trains the model on each subsample. K-fold cross-validation is a resampling technique that splits the data into K folds and trains the model on K-1 folds, using the remaining fold for validation. This process is repeated K times, and the average performance of the model on the validation folds is used to evaluate its overall performance.

Suppose we have a dataset of images of cats and dogs, and we want to train a machine learning model to classify the images correctly. We can use K-fold cross-validation to evaluate its performance. For example, if we choose K=5, we split the dataset into 5 folds, train the model on 4 folds, and evaluate it on the remaining fold. Repeating this 5 times, with each fold serving once as the validation set, and averaging the scores gives an overall measure of the model’s performance.
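The 5-fold procedure described above, sketched with scikit-learn; the iris dataset and logistic-regression model stand in for the cats-and-dogs image example:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# K=5: train on 4 folds, validate on the held-out fold, repeat 5 times
scores = cross_val_score(model, X, y, cv=5)
average_score = scores.mean()
```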

### 22nd September 2023 – Friday class

Today I learnt about the t-test. A t-test is like a detective tool in statistics. Imagine you have two groups of data and you want to know whether they are really different or whether it's just a coincidence. The t-test helps you find out. It starts with a guess called the "null hypothesis," which says there is no real difference between the groups. Then you collect data and calculate a number called the "t-statistic," which measures how much the groups differ relative to the variability within them. If the t-statistic is large, you get a small "p-value," which is like a clue. A small p-value suggests the groups probably differ for real, not just by chance. If the p-value is smaller than a threshold you choose in advance (usually 0.05), you reject the null hypothesis and conclude the groups are different. But if the p-value is large, you don't have enough evidence to reject the null hypothesis. So the t-test helps you decide whether what you see in your data is a real difference or just a trick of chance.
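Here is a small two-sample t-test with SciPy; the two groups are simulated with different means purely for illustration.

```python
# Two-sample t-test on simulated groups with means 10 and 12.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group_a = rng.normal(loc=10.0, scale=2.0, size=40)
group_b = rng.normal(loc=12.0, scale=2.0, size=40)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(t_stat, p_value)
if p_value < 0.05:
    print("reject the null hypothesis: the group means differ")
```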

### 20th September 2023 – Wednesday class

In today's class we learnt that a linear model can still fit data well, successfully capturing the relationship between two variables, even when those variables do not follow the traditional assumptions. Here both variables in the model are non-normally distributed, meaning their data points do not follow the familiar bell-shaped curve. They may also be skewed, indicating an asymmetric distribution, and may exhibit high variance, meaning the data points are spread widely across the range. In addition, high kurtosis suggests that the data has heavy tails or outliers.

The crab molt model relates to the process of molting in crabs, focusing on the size of a crab's shell before and after molting. "Pre-molt" is the size of the shell before the crab molts; "post-molt" is the size after molting. A t-test is then performed to determine whether there is a difference between the means of the two groups.

### 18th September 2023 – Monday class

The linear regression method helps us understand the relationship between variables. The independent variables are also called predictor variables. When we use two or more predictor variables, the model is called multiple linear regression. It extends simple linear regression by considering several predictors at once, allowing us to model more complex relationships and make predictions based on the combined influence of those variables.

Correlation tells us whether two predictor variables are related to each other. If they go up together, the correlation is positive; if one goes up while the other goes down, it is negative. A strong correlation means a tight connection, while a weak one means they are not closely linked.

Sometimes, the relationship between variables isn’t a straight line; it’s more like a curve. The quadratic model helps us deal with these curvy relationships, like when things go up and then down or the other way around. It’s like using a flexible ruler instead of a rigid one.

In summary, linear regression with two predictors helps us make predictions based on multiple factors. Understanding the correlation between predictors helps us see how they’re connected, and the quadratic model allows us to handle more complex relationships in the data. These tools are valuable for making sense of real-world data.
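The three ideas above can be sketched with NumPy on simulated data; the coefficients and noise levels here are invented for the example.

```python
# Two-predictor regression, predictor correlation, and a quadratic fit.
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.normal(size=100)
x2 = 0.5 * x1 + rng.normal(scale=0.5, size=100)   # x2 is correlated with x1
y = 2 + 3 * x1 - 1 * x2 + rng.normal(scale=0.3, size=100)

r = np.corrcoef(x1, x2)[0, 1]                      # correlation of the predictors
X = np.column_stack([np.ones(100), x1, x2])        # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # multiple regression fit
print(r, beta)                                     # beta should be near [2, 3, -1]

# Quadratic model for a curved relationship: y = a + b*x + c*x^2
x = rng.uniform(-2, 2, 100)
y_curve = 1 + 0.5 * x - 2 * x**2 + rng.normal(scale=0.2, size=100)
coeffs = np.polyfit(x, y_curve, deg=2)             # returns [c, b, a]
```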

### 15th September 2023

Today I tried to analyze data from the CDC (Centers for Disease Control) datasets. The dataset contains information about diabetes, obesity, and inactivity rates across counties in various states for the year 2018.

We can use the linear regression method to understand the data for this project. With it we can examine how the variables diabetes, obesity, and inactivity relate to one another, and study the correlations and differences between them. Here diabetes, obesity, and inactivity are the variables of interest, while FIPS (Federal Information Processing Standards) code, county, and state identify each observation. We also perform some statistical analysis, computing the mean, median, maximum, and minimum rates as well as the standard deviation, which helps in better understanding the data. We can visualize the data as a histogram to understand its distribution and to observe outliers. Outliers can also be detected using the best-fit line from linear regression: they are the data points that deviate far from the predicted values.
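A hedged sketch of this descriptive analysis with pandas; the tiny DataFrame and its column names are assumptions standing in for the real CDC county-level file.

```python
# Summary statistics and simple outlier flagging on a stand-in dataset.
import pandas as pd

df = pd.DataFrame({
    "FIPS": [1001, 1003, 1005, 1007],          # county identifiers (made up)
    "diabetes_pct": [11.2, 9.8, 13.4, 10.1],
    "obesity_pct": [33.0, 31.5, 38.2, 32.1],
    "inactivity_pct": [28.4, 25.1, 31.0, 26.7],
})

# Mean, median, min, max, and standard deviation in one call
print(df[["diabetes_pct", "obesity_pct", "inactivity_pct"]].describe())
# df["diabetes_pct"].hist()  # histogram of the distribution (needs matplotlib)

# Flag points more than 2 standard deviations from the mean as outliers
z = (df["diabetes_pct"] - df["diabetes_pct"].mean()) / df["diabetes_pct"].std()
print(df[z.abs() > 2])
```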

### 13th September 2023 – Wednesday class

In today's class, I learnt a new topic called the p-value, which is the probability of observing data at least as extreme as ours, assuming the null hypothesis is true. The null hypothesis (H0) states that there is no relationship between variables; the alternative hypothesis (H1) states that there is a relationship. The usual significance level is 0.05. For example, when we toss a fair coin, the probability of a tail is 0.5 and of a head is 0.5. Assume the null hypothesis H0 is "the coin is fair" and H1 is "the coin is not fair." If we toss the coin 20 times and get a tail every time, the p-value keeps decreasing, which means something is tricky or wrong, and we should reject the null hypothesis. If instead the p-value is at or above the significance level of 0.05, we fail to reject the null hypothesis and have no evidence that the coin is unfair.
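The coin-toss example can be checked with an exact binomial test from SciPy (`binomtest` is available in SciPy 1.7+).

```python
# Exact binomial test of H0: P(tail) = 0.5.
from scipy.stats import binomtest

# 20 tails in 20 tosses: overwhelming evidence against a fair coin
result = binomtest(k=20, n=20, p=0.5)
print(result.pvalue)  # far below 0.05, so we reject H0

# A less extreme outcome, 12 tails in 20 tosses, is consistent with fairness
print(binomtest(k=12, n=20, p=0.5).pvalue)  # above 0.05, fail to reject H0
```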

In the previous class I was confused about heteroscedasticity, but today what I understood is that heteroscedasticity is a condition in which the data points fan out from the best-fit line, i.e., the errors (residuals) grow as the value of the variable increases. In simple terms, heteroscedasticity occurs when the spread of the differences between actual and predicted values is not constant. Inspecting the residual plot is the visual way of finding heteroscedasticity in a model. The statistical way is the Breusch–Pagan test, which detects the presence of heteroscedasticity in a linear regression model using a null and an alternative hypothesis.

Steps to perform the Breusch–Pagan test:

1. Fit the linear regression model and obtain the residuals.

2. Square the residuals.

3. Fit a new (auxiliary) linear regression model using the squared residuals as the response, and record its R² value.

4. Calculate the chi-square test statistic as nR², where n is the total number of observations and R² is the R-squared of the new regression model.

If the p-value is less than 0.05, we reject the null hypothesis and conclude that heteroscedasticity is present; otherwise we fail to reject the null hypothesis and conclude that the errors are homoscedastic.

### 11th September 2023 – Monday class

In linear regression we build a model and use the best-fit line to predict the value of one variable from the value of another, and describe the relationship between them with the formula y = β0 + β1x + ε, where x is the independent variable, y is the dependent variable, and ε is the error term. The variable we predict is called the dependent variable, and the variable we use to predict it is called the independent variable. Because the data points do not lie exactly on a straight line, each point has an error: the vertical distance between the data point and the regression line, which is why the error term appears in the formula. We use the least-squares method to minimize these errors.
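The least-squares fit of y = β0 + β1x can be computed by hand with NumPy; the true coefficients below are chosen arbitrarily for the demonstration.

```python
# Closed-form least-squares estimates of slope and intercept.
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 50)
y = 4 + 2 * x + rng.normal(scale=1.0, size=50)  # true beta0 = 4, beta1 = 2

# Slope: covariance of x and y divided by variance of x
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
# Intercept: forces the line through the point of means
b0 = y.mean() - b1 * x.mean()

residuals = y - (b0 + b1 * x)   # vertical distances to the fitted line
print(b0, b1)                    # close to 4 and 2
```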

We explored the CDC diabetes data, which has 3 variables: diabetes, obesity, and inactivity. To build a model that truly fits the data and is reliable, we need to find the relationships between the variables using linear regression and use the R² statistic to evaluate the model.

Also, in today's class I got to know what a residual is and learned a new topic, heteroscedasticity, which I will try to learn more about and understand better by the next class.
