Hello! Good evening! For those of you new to Columbia, welcome! And welcome to my course, Storytelling with data.

How many of you are in your first semester at Columbia? Show of hands? Very cool!

Tonight I’d like us to get to know each other a little, and especially help you get to know your fellow classmates because you’ll be working together on group projects and have a great opportunity at Columbia to consider many perspectives as part of what you learn.

I will also introduce an example case study we will use to demonstrate various concepts, and along with the case study, we will briefly discuss some software we can use to start working on that case study.

So, let’s begin with some introductions.

My name is Scott Spencer. I’ll be your teacher, one of your guides, in learning Storytelling with data this semester.

When I’m not teaching, I provide data science consulting on projects that I believe are good for us all, like using analysis to change people’s minds about mitigating adverse climate change. I also consult in professional sports, and am working on a super-secret project to simulate entire baseball games — don’t tell anyone!

To perform these analyses, I typically use Bayesian, and ideally generative, modeling implemented in Stan, which is a probabilistic programming language. Because I think I can be helpful, I give back to the Stan community by helping to develop interfaces, answer questions, squash bugs, and so forth. In the past, I have also provided data science analyses in engineering and legal fields.

As our course material makes some of my projects relevant, I will try to bring those into the discussion and share them with you. In fact, I’ll briefly do a little of that tonight.

We also get to have Dr. Laura Scherling helping to teach this course. Some of you may have already met her. She can give you yet another perspective on storytelling with data as it applies to her field and experience. I’m going to put her on the spot for just a minute, to let her tell you a little about what she does when she’s not helping you learn. Laura?

[LAURA INTRODUCES HERSELF]

Thanks so much! Again, I love having your perspective and help with this course. Ok, so let’s spend a few minutes discussing our course format, and we’ll also take a quick look at how to get help outside the classroom, also known as office hours.

[OPEN CANVAS / WEBSITE]

First off, congrats on traveling to Zoomland to join the class! We’ll continue to meet on Mondays from 6:10 to 8pm, as we’re doing tonight. We’ll see how things play out for meeting in person on campus, either during our scheduled time or, when spring comes and we have grass and warmer weather, maybe on the lawn for a coffee chat or something.

Along with holding a weekly discussion or lecture, Laura and I will both be available to schedule office hours. It’s pretty easy: just click the links here or on our course website.

I’ll show you where in the menu bar now. The link will open our calendars and you can schedule whatever open slot works for you.

Let me click the links and show you.

[DEMONSTRATION]

Of course, we’re first going to encourage you to discuss your questions and help answer other students’ questions on our online discussion forum. Posting your question there will ensure you get the quickest response.

In fact, to help encourage you all to make it a habit in this class to contribute with your questions and answers to this online discussion, that will be part of your grade. The discussion tracks whether you ask or answer questions, and how many. Each week, part of your participation grade will reflect whether you’ve done that. You’ll get credit for at least minimally engaging here, but I encourage you to go beyond the minimum because a big part of a class here at Columbia is learning from each other. Speaking of which, you are all off to a great start by introducing yourselves in the discussion! Let’s remember to tag our questions and discussions to help everyone find them again later. For questions and discussions for this lecture, then, tag them under lecture 01. I’ll show you now. Make sense?

I’ll go over the details of how we’ll grade in a few minutes.

Speaking of saying hello, getting to know one another is an important part of learning at Columbia and in my course.

So, along with saying hello in discussion, we’re going to have an in-class activity now. Laura and I are going to set you up in groups and have you work on a group resume, which will include the following: your collective skills with tools, education, work experience, hobbies. Once your groups are set up, I’ll give you about 10 minutes, then we’ll ask several groups to share. Sound good? I’ll ask Laura to set up the groups. Laura?

[STUDENTS GROUP UP AND SAY HELLO]

Welcome back from your breakout groups! Can I have someone from group [X] share their group resume with the class?

Wonderful. I am excited to get to know each of you better both in class and during my office hours. We’ll get to know one another more throughout the semester. Ok, so what’s this course actually about? Storytelling with data.

Let’s turn to that question now.

Now data analytics commonly involves specialized knowledge that executives and other audiences tend to be less familiar with. And different audiences do not always use the same vocabulary to explain even the same concepts of interest.

Have any of you seen or experienced a situation where people having different roles in a business use different language to describe similar things, not realizing it?

Or have trouble understanding some underlying concept because they were unfamiliar with that language that was used to describe the concept?

These issues, or gaps in communication, are very common. And they can be especially problematic when data analysis is involved.

So how can we think about resolving such a gap? Who do you feel would be better equipped to create common understanding across the people interested in whatever the business issue is?

Now various authors have proposed different approaches to bridging this gap in the context of communicating data analyses.

So be thinking about, who should do it? Should it be an individual? A team of people? Perhaps a business executive? Or maybe a data scientist would tend to be better equipped? How about my students? What skills might you need? One of the aims in this course is to enable you to bridge that gap.

The most fundamental idea is not specific to analytics: it applies to all communications we’re interested in here. And, by the way, I’m pulling this idea from a chapter from Jean-Luc Doumont’s book that I’ll have you read in a few weeks, which I’ve cited here.

Communications have a goal. A purpose. And the measure of whether our communication succeeds in that goal is this:

Get our audiences to pay attention to, understand, and be able to act upon a maximum of messages, given constraints.

This single concept, what I’m also showing you on this word diagram, is so fundamental that you’re going to grow tired of hearing me repeat it throughout this course. But I hope that by the end of this course, you will automatically begin to think about how each of your communications meets each of the components I’m showing you here.

Along with guiding and measuring our communications by this, we’ll also need to apply a range of skills. I’ve categorized a few of the skills that various industry experts most commonly identify. Let’s take a look.

Reading together, to bridge these communication gaps, we need skills in project management. Data wrangling. Data analysis. Subject matter expertise. Design. And Storytelling.

Do all these broadly labelled skills seem to fit together?

As we move forward, be thinking about the types of things that we would expect to find within these skills, and to what extent the organizations you are interested in working for, or even leading, want them. To some extent, you’ll be practicing all of these in this course. Now let’s consider this course from another perspective.

If you think about what we just considered from Doumont, I think you’ll see that those goals in communication show up here. First off, our typical goal is to drive change in some way.

We see that right at the intersection of the Venn diagram I’m showing you here — our first visual — to help us think about this course. We’re in an applied analytics graduate program. So, perhaps obviously, our class should relate to that. And we represent data analyses in the first, or top, circle.

But there are many more useful skills in applied analytics. These include skills around writing narratives and creating visuals, and then ultimately bringing all three together into a single communication with the goal of enabling change.

To help us think about these three overriding aspects, let’s relate them to quotes from a few characters.

To help give personality to each of these areas, I’ve added a quote to each. To start, we’ll have Sherlock Holmes, the quote in the upper right here, representing data analyses. His character shouted, “Data! Data! Data!”, “I can’t make bricks without clay.” Data was essential in helping the detective uncover insights into the mysteries he investigated. And it will be to you, too!

We’ll also study ideas from Daniel Kahneman later in the course. He explains, and I quote him in the lower left: “No one ever made a decision because of a number. They need a story.” That’s the “storytelling” part of our course title. And when we think deeply about narrative and writing, our communications about data analyses will improve dramatically. It’s common that students focus so hard on learning to use software tools that they lose sight of this. Don’t lose sight of this very important aspect.

Speaking of visuals, both narrative and data analyses are complemented well by visuals in various forms. And the best part of visuals is captured well in a quote from John Tukey, a famous mathematician, who says, “The greatest value of a picture is when it forces us to notice what we never expected to see.” We will start this course getting you up to speed on creating visuals so that we can use them in communications, and as we gain experience with them, we’ll come to appreciate what Tukey means.

Our aim in this course, as I mentioned, is to bring each of these ideas together in an effort to enable decisions and change minds. It is to make an impact. To make an impact with various audiences, which we’ll come to learn about, and practice communicating with.

So let’s look at what kinds of deliverables we’ll use to give us practice with these three aspects of communication.

I’m representing all your assignments here as a graphic. The pink represents individual assignments; the blue indicates group work. The weighting of these assignments is 40 percent individual and 50 percent group. Within the individual assignments, each one is 10 percent of your grade, and participation overall is 10 percent.

As I mentioned, the four homeworks will be focused on particular skills.

The first and second assignments give you practice with creating basic data visuals.

The third assignment shifts to communication. You’ll write a memo about an analytics project. Now this memo really bridges the work between individual and group activities. What I mean is each of your group members will have written a memo about an analytics project they might take on for an organization, and then the group can decide to tackle one of those projects together, or the group may decide to do something entirely new and different from any of the memos. But the idea is that the individual memos help you bring ideas to your group to get started.

Now the last, the fourth, individual assignment, gives you more practice in creating explanatory visuals, so thinking about how visuals help us communicate with others.

I’ll give you more details as we move forward in the next few weeks. Does that make sense? Now another part of how we’ll learn is by working on a case study all together, as a class. Let’s turn to that now.

Ok, we’ve started talking sort of abstractly about analytics projects. I’d like to introduce a few concepts for our class tonight by beginning a sort of case study. The concepts will touch upon the idea of data, and some software we can use to work with it.

In the real world, we’re usually confronted with a problem or opportunity, and then we try to gain insight into it in some way that enables decision making. We can use questions as tools to guide how we approach any given project. I’ve listed a few important questions here.

Let’s read these together: What problem is to be solved? Is the problem important? Be specific. Could an answer have impact? How? Do data have a role in solving the problem? Are the right data available? If so, where? If not, can we generate them somehow, like with an experiment? In what contexts may the data be generated? Finally, is the organization ready to tackle the problem and take actions from insights? In the real world, and in this class, this is an iterative process of matching data and analysis to possible goals and actions. Does that make sense?

Ok, so let’s go through a short example together.

From reading your hellos on discussion, quite a few of you are living in NYC and have some experience living here.

Has anyone used Citi Bike, the New York bike sharing program? Cool. For those of you that haven’t had an opportunity in New York, have you tried renting bikes from bike shares elsewhere?

[SHARE SOME EXPERIENCES]

I use it all the time … this is a picture of me as a grad student at Columbia leaving campus on a Citi Bike one evening. I use the bikes both for basic transportation, and recently I’ve had fun on the electric versions of their bikes riding with my friends through Central Park when not riding my own bike. Or bikes, I should say. There are five bikes in my apartment. :o

What do you notice about this docking station?

[THERE ARE SEVERAL BIKES, AND SEVERAL PARKING SPOTS AVAILABLE. I WAS ABLE TO RENT ONE.]

And here’s another photo. I took this photo last semester. It’s another of Citi Bike’s docking stations just outside Columbia University’s main campus. And along with the photo, I’m showing a screenshot of the Citi Bike app that lets us rent from that particular docking station. It provides information about the state of the docking station.

Now I see just a single bike among many docks. If that were the state of the docking station right now, and we all went outside campus to rent one, what’s likely to happen?

If the station has no bikes, or is completely full of bikes, we might say it’s imbalanced. By imbalanced, of course, I mean that either bikes or docks are unavailable until something changes. Make sense?

The New York Citi Bike bike share was set up in 2013. And imbalances at these docking stations have been an issue for as long as there have been stations and bikes. Three years after the launch, with the system still growing, a Citi Bike spokeswoman explained, and I’m quoting her here, let’s read together: “Rebalancing is one of the biggest challenges of any bike share system, especially in New York where residents don’t all work a traditional 9-5 schedule, and people work in a variety of other neighborhoods.”

For those of you who are not familiar with New York City yet, there are lots of neighborhoods within it to explore, and before the pandemic, work and lifestyle commute patterns were complex, and on top of that, the City plays host to probably more tourists than any other city in the United States, at least. Those tourists also use the bike shares.

So what might the natural flow of bikes depend on? Ideas?

[DISCUSSION]

Since the pandemic, as is the case in other major cities, work and lifestyle patterns have changed, and continue to change, dramatically. Right? Would someone familiar with NYC help give us a sense of how our lives have changed?

[DISCUSSION]

The ideas in social distancing can certainly help us stay healthy. But as people return to the city, and continue to socially distance, this places an even greater burden on commuting within the city. In this New York Times article this year, which we’ll look more closely at in a later discussion, and in fact in your first homework, the author explains how biking and bike shares are becoming even more critical for this city.

With that context, how might we begin to help Citi Bike with rebalancing today? How might we approach this?

To start our conversation, would it make sense to first consider what Citi Bike does now?

From the cited article, I’m showing you four of the ways that Citi Bike has attempted to rebalance the bike share system. Anyone heard of Citi Bike’s Bike Angels program? You can join it, and you get biking credits by riding bikes from full stations to empty ones, woohoo!

They also use electric bikes and vans to move bikes from full stations to empty ones, and in some high-traffic locations they even use a valet service that can help you.

Even with all this, rebalancing, as we saw a few minutes ago, remains a major problem. Last fall, when I was teaching in person near Columbia, the only time I was able to rent a bike, either to go to campus from my UWS apartment or to leave campus, was once the weather became quite cold. On warmer days, there were never any bikes!

So we will begin to explore this rebalancing issue together as an applied way to learn how to accomplish our communication goals in this course. And this leads us to our next in-class collective exercise.

I’m going to suggest that before thinking about data, start by thinking about the real world. How we experience it. Try to identify events and user behaviors associated with the NYC bike share. What events may be correlated with, or cause, empty or full docking stations? Ideas?

[STUDENT IDEAS]

For riders, are there certain behaviors or preferences that lead to the problem?

And, by the way, these images I’m showing you aren’t just for decoration. I never show you anything that I do not believe gives you information. And we’ll talk about that idea in this course, too. So these images, I’ve picked them to give you hints or ideas.

What analogous or related things can we draw comparisons from that give us more context?

Now that we’ve identified some possible events and behaviors, what information, what data, might we find available that relate to those? What measurements may be recorded? How can we find out? Might these data lead to insights that help inform decisions or goals? How will we find them? Google searching is probably a solid first approach.

Before searching or googling for these types of data sources, let’s discuss, for a moment, how we think of data. What are data?

Let’s first just read two definitions together. First, datum, which is the singular form of data. “A datum is an abstraction of a real-world entity (person, object, or event). The terms variable, feature, and attribute are often used interchangeably to denote an individual abstraction.”

Now we can probably guess what a data set is. Let’s read this definition. “It consists of the data relating to a collection of entities, with each entity described in terms of a set of attributes. In its most basic form, a data set is organized in an n by m data matrix called the analytics record, where n is the number of entities (rows) and m is the number of attributes (columns).”

Hang with me a moment, I want us to consider just a few more definitions to differentiate types of variables, then we’ll start to apply them to our example.

So, it may seem obvious, but data may be of different types. These types include nominal, ordinal, and numeric, and the numeric type may be on two scales.

First nominal. Nominal is another way of saying we’re giving attributes names, like apple or banana. Notice that it is not clear how we would order apples or bananas, is one greater than the other?

When we can naturally order them, we call that type ordinal.

Then, we have numeric types. Numeric types can be represented using integer or real values. Within these types, we can think of some on an interval scale, others on a ratio scale.

An easy example for understanding the difference in the two scale types is temperature. More specifically, consider the difference between Celsius and Kelvin. We cannot say that 20 degrees Celsius is double or twice that of 10 degrees Celsius, right?

But we can make such a comparison with Kelvin. That’s because Kelvin is on an absolute scale that begins at 0. We can say that 20 Kelvin is double 10 Kelvin. Does that difference make sense? Data and information may also be found as structured or unstructured. Let’s consider this next.
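Before we do, for your notes, here’s a tiny sketch of those variable types in R; the fruit and size values are just made-up illustrations.

```r
# nominal: names with no natural order
fruit <- factor(c("apple", "banana", "apple"))

# ordinal: names with a natural order
size <- factor(c("small", "large", "medium"),
               levels  = c("small", "medium", "large"),
               ordered = TRUE)

# numeric, interval scale: Celsius (20 is not "twice as warm" as 10)
celsius <- c(10, 20)

# numeric, ratio scale: Kelvin has a true zero, so ratios are meaningful
kelvin <- celsius + 273.15
```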

What do we mean by structured and unstructured data or information? Again, we can start with definitions and examples to work with. “Structured data are data that can be stored in a table, and every instance in the table has the same structure (i.e., set of attributes).” And unstructured data are simply data not yet organized to meet our definition of structure.

In a moment, let’s try to explore examples of these data types and structures using data related to Citi Bike.

I’ve collected a few sources of data for us to explore that may be related to Citi Bike. And I encourage you to explore these links to data after our class. We’ll look at one of these together in class and you’ll get to look at another for your first homework.

First, the City of New York has made public its data recording every time a bike is unlocked and used. Second, I’ve included a link to Taxi usage in the city.

Similarly, I’ve included data on subway and MTA usage, that is, the Metro-North train lines.

Along with alternative transportation information, I’ve included a link to weather and traffic.

Finally, you’ll notice I’ve included land elevation data. Why might that relate to rebalancing?

So we’ve found some data. What do we do with it? Let’s spend a few minutes talking about components of a process for data analysis.

We’ll typically need to use some sort of software to explore and communicate the data.

And we can think of data exploration and analysis as a workflow. We import the data into our software analysis tools. We tidy and transform it, meaning we decide how to handle miscoded or missing data and we create new variables and summaries from what we’ve collected. Then we visualize it in various ways, and model it.

Typically we repeat this process iteratively many times until we understand what insights the information provides. Then, we consider how to communicate those insights to whomever needs it: our audiences.
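For your notes, here’s a rough sketch of that workflow in R, assuming the tidyverse packages; the file name and the tripduration column are placeholders, and we’ll work with real Citi Bike data in a few minutes.

```r
library(tidyverse)

# import the raw data (placeholder file name)
trips <- read_csv("trips.csv")

# tidy and transform: handle missing values, create a new variable
trips_clean <- trips %>%
  drop_na() %>%
  mutate(long_trip = tripduration > 30 * 60)  # hypothetical column

# visualize: look for patterns, then iterate
ggplot(trips_clean, aes(x = long_trip)) + geom_bar()

# summarize or model, then decide what to communicate
trips_clean %>% count(long_trip)
```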

None of these steps are trivial. And as a diagram for workflow, this one is sort of very general and basic.

Now you’ll notice the entire diagram is labelled as program. Ideally, we want our entire workflow to be reproducible, for us and others, and coding is currently the best approach to accomplish these goals.

And two popular software choices for data analyses are R and Python, though their histories differ. R was developed specifically for data analysis, and has been evolving ever since.

Python was developed as a general programming language for teaching, and it too has been evolving ever since. Both languages are extended in their functionality through language libraries that others develop. I’ve created R packages for data visualization, for example.

I’ll be demonstrating ideas in this class using R for a few reasons. But tonight I’m also going to enable you to mimic the same materials I present in Python, if you run into that need. We will start a bit abstractly at first, and then apply these tools to our new Citi Bike case study.

[POLL]

Before we discuss them, let’s take a quick poll. How many of you have used R before? How many of you have used Python before? How many have used both? How many of you have used neither?

Thank you. This is helpful to know. Now this course is not intended as a learn to program course in either language, but we will need to use software to create our visualizations for communication. So I’ll show you how we set things up to do just that, and show you the best software for its purposes along the way.

So in R, we can load libraries by calling a function called library and giving it the package name as the parameter.

We can load this into an R session as I have generically coded it here on the left.
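Concretely, assuming you’ve already installed the tidyverse package, loading it looks like this.

```r
# install once per computer (commented out so it doesn't rerun every session)
# install.packages("tidyverse")

# load the installed package into the current R session
library(tidyverse)
```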

Similarly, people have written libraries in Python, and two that mimic the functionality I’ll be showing you from R’s tidyverse are called datar and plotnine. These Python libraries were ported from the R tidyverse packages for a reason.

I’ll later explain some of the things in all these libraries. But to access any library installed on your computer, you simply type them in like I’ve shown for the correct language environment.

Later, when we work through the examples, we’ll actually run this code using RStudio. So we’ll switch back and forth between these visuals and coding in my RStudio IDE.

After we load the libraries, we need to import our data. And it is very common to find data stored in a comma-separated values, or csv, file. Here, I’m showing you one way you can import a csv file into either R or Python. The two are practically identical, right?
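On the R side, that import looks roughly like this; the file name here is only a placeholder for whatever csv you download.

```r
library(tidyverse)

# read a comma-separated values file into a data frame (a tibble)
trips <- read_csv("citibike-trips.csv")  # placeholder file name
```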

Now the imported data file can take many forms or structures. We will start simple, with a comma-separated values file, a csv file.

I’m showing you a few example data structures in R and their sort of closest equivalent names in Python. We won’t go through them all at the moment, but I’m leaving these here for you as a reminder. The main structure I want to focus on here tonight is called a data frame.

You can think of a data frame as kind of like a spreadsheet table, but where each column holds one particular data type and typically has a variable or column name. The rows represent observations of whatever types are in the respective columns. So a single element (like a cell in a spreadsheet) is one observation for one variable. Does that make sense? We’re going to create and use a data frame in our example in a few minutes.
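As a tiny illustration, here’s a hand-built data frame in R; the station names and values are made up.

```r
# a small data frame: each column is one variable of one type,
# and each row is one observation
stations <- data.frame(
  name     = c("Broadway & W 116 St", "Amsterdam Ave & W 119 St"),  # character
  docks    = c(39L, 27L),                                           # integer
  electric = c(TRUE, FALSE)                                         # logical
)
stations
```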

We manipulate these data frames using functions in either language.

Before getting into any specific function, let’s review the basics of how functions work. And I’m showing a couple of general ways they are used in both languages, in a way that will become more useful soon.

Basically, we use a function by coding its name, and giving it anything it needs to do its job. The things it needs are called parameters. In the gray boxes, I’m showing you generic versions in both languages, for your reference, but let’s look at the colored example.

In the colored example here, the name of the function is “roll2”, and the thing it needs, the parameter, is called “bones”. The function gives you back, or returns, the sum of two dice that are rolled. Now, our main course examples will not be about gambling, but this is a simple example for us to just examine a function.
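If the slide isn’t in front of you, here’s roughly what that function looks like in R; the default of a standard six-sided die is my assumption.

```r
# a function named roll2 with one parameter, bones (the die faces to roll from)
roll2 <- function(bones = 1:6) {
  dice <- sample(bones, size = 2, replace = TRUE)  # roll two dice
  sum(dice)                                        # return their sum
}

roll2()              # roll two standard dice
roll2(bones = 1:20)  # the same function works for twenty-sided dice
```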

Let’s look at a list of functions that we will commonly use for data analysis.

As I mentioned, we sometimes want to transform our data in several ways. And these functions are common things we do.

Finally, once we have the data frame in R, we usually need to modify it in various ways before we can visualize it or gain insights. So here, I’m just listing some important functions for manipulating data and showing you that the names pretty much map straight into Python’s libraries, too. And these functions are named, using the English language, as verbs telling you what they do. So think of them as English verbs too. Use the function “select” to select a variable. Use the function “rename” to rename a variable in your data frame. Use the function “filter” to filter rows out of your data frame.
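Here’s a minimal sketch of those three verbs, assuming a data frame named trips with these hypothetical column names.

```r
trips %>% select(starttime, start_station_id, end_station_id)  # choose variables
trips %>% rename(bike_id = bikeid)                             # rename a variable
trips %>% filter(!is.na(start_station_id))                     # keep matching rows
```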

Now a couple I’ve listed here take a little more practice to use. Those are pivot_longer and pivot_wider. These two important functions reshape your data frame. And we’ll talk more about this. As part of preparing data for analysis and communications, you’ll become very familiar with these functions, as they are common whatever tool you use in the future.
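To give you an early taste, here’s the general shape of those two calls; counts, docked, and rented are hypothetical names.

```r
# wide to long: gather several columns into name/value pairs
long <- counts %>%
  pivot_longer(cols = c(docked, rented), names_to = "status", values_to = "n")

# long back to wide: spread the pairs out into columns again
wide <- long %>%
  pivot_wider(names_from = status, values_from = n)
```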

Now, we can execute or run a single function and store the results in an object. But frequently we want to apply several functions to our data in one or more data frames. Let’s consider how we approach that.

In R, we can use something we call the pipe operator, which we code as the percent sign, followed by the greater than sign, and another percent sign. Here’s what it does. The pipe operator takes the thing on its left, typically your data frame, and gives it to the first parameter of the function to the right. And the first parameter of all the functions I just showed you expects you to give it a data frame. Now Python generally works a bit differently, and until some authors created Python libraries for this, the closest idea was to chain methods together. We won’t cover method chaining here, but you can google it to see how it works. I also have a short paper demonstrating Python method chaining on my website if you’re interested.

The package I’m showing you here, datar, does let you mimic almost exactly what we will be doing using R with tidyverse. And I’m generically showing you how this looks in both languages.
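To make the pipe concrete on the R side, here’s a small before-and-after using the same hypothetical trips data frame; both versions give the same result.

```r
# without the pipe: nested calls, read from the inside out
arrange(filter(trips, !is.na(start_station_id)), starttime)

# with the pipe: trips flows into filter(), and that result flows into arrange()
trips %>%
  filter(!is.na(start_station_id)) %>%
  arrange(starttime)
```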

So I’m hoping that as we develop software coding skills in this course using one language, R and its packages, you’ll be able to look back on this information, whenever you need to, and be able to translate your R code to Python. I’m hoping you see that if you use these Python packages, they have a few style quirks but are otherwise almost identical.

Questions so far?

Ok, that’s enough abstract discussion for tools for the moment. Let’s start applying them to our case study.

To do that, we start the R (or Python) session, and then load the libraries that contain functions we’ll use. Let me show you now in RStudio inside something called an R Markdown file. For your homework 1 you will use an R Markdown file.

How many of you have used an R Markdown file? Excellent, you are already ahead of the game.

Now that we have our libraries loaded, we need to read in our csv file. Let’s look at that code now. As we review the code and import the data, we’ll also look at a few rows of the data as examples of the definitions we just discussed.

Is this data structured or unstructured, and what makes it so?

Can we identify any nominal data in this data frame? Do you see any variables you might consider ordinal? Do you see examples of numeric data?

[DEMO]

Now since we’re thinking about rebalancing, let’s keep that in mind as we review the variables in these data. I do not see anything that indicates whether a bike was rebalanced. Do you?

But let’s think a little harder about what that means.

We do see variables that may help us. In these data, each observation represents a single bike trip. It includes a bike id. A start station id. An end station id. And a trip start time. Now these data represent bike trips, not removal for rebalancing or repairs. That means, normally, when someone takes a bike and then parks it, and then someone else takes the bike, the first ride’s end station should match the next ride’s start station, right? Does that make sense?

Cool. So I think we can try to make our own rebalanced variable using these data we have. But we will need to transform these data. To do that, we’ll go through essentially the same logic that we just discussed. Let’s see this in action using code.

Next we’re going to modify our data frame in several ways. First, we’re going to change our variable (or column) names by replacing white spaces with underscores, because that’s easier to read and work with in code.

Then, we are going to filter or remove missing observations; more specifically, we’re going to remove observations that do not have a start_station_id because we won’t be able to match start to end, right?

Then we will order the observations so that each ride is in the order it happened. Next, we will group the data by bike id. Finally, we will calculate our new variable indicating whether the bike was rebalanced. Let’s walk through this code and run it in our R session.
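For your notes, here’s a minimal sketch of that pipeline, assuming the renamed columns end up as bikeid, starttime, start_station_id, and end_station_id; the real column names may differ slightly.

```r
trips_rebalanced <- trips %>%
  rename_with(~ str_replace_all(.x, " ", "_")) %>%      # spaces to underscores
  filter(!is.na(start_station_id)) %>%                  # drop rides we can't match
  arrange(starttime) %>%                                # put rides in time order
  group_by(bikeid) %>%                                  # follow one bike at a time
  mutate(rebalanced =
           start_station_id != lag(end_station_id)) %>% # moved without a rider?
  ungroup()                                             # (first ride per bike is NA)
```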

[DEMO]

Now that we have our modified data frame, we can visualize any aspect of it we want. For now, we’ll start really easy, and just map the count of our rebalanced variable to a bar.
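In R, that plot is only a few lines, assuming the rebalanced variable we just created.

```r
# map the count of our rebalanced indicator to bar height
ggplot(trips_rebalanced, aes(x = rebalanced)) +
  geom_bar() +
  labs(x = "bike appears to have been rebalanced", y = "number of rides")
```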

[DEMO]

Soon we’ll dig much deeper into not only how this graphics software works but, more importantly, how to think about what we want to visualize for insights and for communications.

For tonight, I want to wrap up by showing you a couple of example interactive communications, where the graphics are not trivial like here. Towards the end of the semester we will be covering interactive communications. And that’s actually part of your group project.

The first example, you can look at yourself by clicking on the link here after class. This is actually an example I made for an earlier class, a couple of years ago, and it has a back story. One of their assignments was to create an information graphic. And I was using Citi Bike data to demonstrate things.

And to learn about information graphics, we looked at worldwide awards organizations to see what they said about information graphics. So a student raises his hand and asks, “Professor, have you submitted your work to any of these awards organizations?” Nope. And so, challenge accepted. I agreed to enter our class example in the Kantar Information is Beautiful Awards so they could see how their toy example stacked up to the real world. It was actually longlisted for the award in visualizing mapping data.

I’ll wait to discuss what all the markings on this graphic mean for the moment, but you can click the text here later to interact with it on a web page.

Now let me show you one example from my actual work. This material is proprietary, so I can’t actually give you the file. But I’ll show you a portion of it, so you can get a sense of the possibilities of interactive communications.

Now I made this example from a larger range of tools than you’ll likely do in this class. But in previous classes I was able to get students to build similar types of interactivity. Here, I’m just showing you a static image of the interactions. Let me open the page and explain it.

[DEMO]

Ok, with that introduction to the class, let’s look at your first homework.

You’ll have a couple of weeks to work on it. I’ve just made it available to you now on courseworks. I’ve made this first assignment pretty easy. I’ve given you partial answers to the code and such. But the point is not whether you find the code you add easy to write. The point of this first assignment is to understand what I’ve given you. Understand how I’ve coded it, what each line of code does, so that on later assignments you’ll be able to code it yourself. Does that make sense?

So this is both practice, but it’s also another form of learning.

Let’s take a quick peek together.

[DEMO]

Ok, at the end of each class, I give you a set of resources that I find helpful for what we’ve covered.

Most of these you’ll also find in the syllabus, but sometimes you’ll see a few new ones specific to our discussion. It’s sort of a list of high-quality references I’ve hand-picked for you to dig into when you are ready for a deeper understanding.

And I do encourage you to at least find them and skim a bit to get a sense of what they are.

It was very nice to meet you all. I’ll stay for questions. Otherwise have a good night, and I’ll see you on our discussion forums until we meet next week!