ACM SIGCSE 2004: Multimedia Construction Projects Mark Guzdial

Mark Guzdial
College of Computing
Georgia Institute of Technology
guzdial@cc.gatech.edu
http://www.cc.gatech.edu/~mark.guzdial

Funding to support this work came from the National Science Foundation, Georgia Tech's College of Computing, the GVU Center, the Al West Fund, and the President's Undergraduate Research Award.
Plan

- 7:00-7:10: Introductions
  - Why are we all here? Tune the workshop to needs.
- 7:10-7:30 (23 slides)
  - What's on your CD
  - Why media computation
  - What we're doing in the course at Georgia Tech
- 7:30-8:00: Picture manipulations in Python (65 slides)
- 8:00-8:15: Sound manipulations in Python (40 slides)
  - Video processing (probably won't get to) (21 slides)
- 8:15-8:30: Break
- 8:30-8:45: Mapping to Java (Squeak, if desired)
- 8:45-9:45: You Play
- 9:45-10:00: Wrap-up
Introductions

- Your name
- Where you're from
- What you teach where you'd like to use media projects
- What languages you teach
- What do you want to get out of this workshop?
What's on your CD

- Materials from our Introduction to Media Computation course
  - Pre-release PDF of the Media Computation book in Jython
  - Course slides
  - Instructor Resources (how to tune for different places, grading tool)
- Python (Jython)
  - Jython Environment for Students (JES)
  - Media classes
- Java
  - DrJava
  - First three chapters of the Media Computation book in Java
- Squeak
  - Latest Squeak
  - MediaTools: Squeak-based media exploration tools
  - Squeak computer music essays
- Material for this workshop
  - Workshop, Java, and Squeak API slides
A Computer Science Undergraduate Degree is Facing Challenging Times

- We're losing students, at an increasing rate.
  - Women's and minorities' percentage of enrollment is dropping
  - High failure rates in CS1 (35-50% or more)
  - Fewer applications into CS
- "All programming jobs going overseas"
- Research results: "Tedious," "boring," "lacking creativity," "asocial"
- All of this at a time when we recognize the critical role of IT in our economy, in all jobs
Strategy: Ubiquitous Computing Education

- Everyone needs computing, and we should be able to teach computing that everyone cares about.
  - Make computing relevant and accessible to students.
  - Minors, certificates, joint degrees, alternative paths to the major.
- At Georgia Tech, we do require every student to take an introductory computing course.
  - Used to be just one, based on the TeachScheme approach (HTDP)
  - Then came the "Recent Unpleasantness"…
Computer science is more important than Calculus

- In 1961, Alan Perlis argued that computer science is more important in a liberal education than calculus
  - Explicitly, he argued that all students should learn to program.
- Calculus is about rates, and that's important to many.
- Computer science is about process, which is important to everyone.
CS1315 Introduction to Media Computation

- Focus: Learning programming and CS concepts within the context of media manipulation and creation
  - Converting images to grayscale and negatives, splicing and reversing sounds, writing programs to generate HTML, creating movies out of Web-accessed content.
  - Computing for communications, not calculation
- Required course at Georgia Tech for Architecture, Management, and Liberal Arts students; optional for Biology
- 121 students in Spring 2003, 303 in Fall '03, and 395 for Spring '04
  - 2/3 female in Spring 2003 MediaComp
Course Objectives

- Students will be able to read, understand, and modify programs that achieve useful communication tasks
  - Not programming from a blank piece of paper
- Students will learn what computer science is about, especially data representations, algorithms, encodings, forms of programming.
- Students will learn useful computing skills, including graphing and database concepts
Python as the programming language

- Huge issue
- Use in commercial contexts authenticates the choice
  - ILM, Google, Nextel, etc.
- Minimal syntax
- Looks like other programming languages
  - Potential for transfer
Rough overview of Syllabus

- Defining and executing functions
- Pictures
  - Psychophysics, data structures, defining functions, for loops, if conditionals
  - Bitmap vs. vector notations
- Sounds
  - Psychophysics, data structures, defining functions, for loops, if conditionals
  - Sampled vs. synthesized sounds, MP3 vs. MIDI
- Text
  - Converting between media, generating HTML, databases, and networking
  - A little on trees (directories) and hash tables (databases)
- Movies
- Then, Computer Science topics (last 1/3 of the class)
Some Computer Science Topics inter-mixed

- We talk about algorithms across media
  - Sampling a picture (to scale it) is the same algorithm as sampling a sound (to shift frequency)
  - Blending two pictures (fading one into the other) and blending two sounds is the same algorithm.
- We talk about representations and mappings (Gödel)
  - From samples to numbers (and into Excel), through a mapping to pixel colors
- We talk about design and debugging
  - But they mostly don't hear us
Computer Science Topics as solutions to their problems

- "Why is PhotoShop so much faster?"
  - Compiling vs. interpreting
  - Machine language and how the computer works
- "Writing programs is hard! Are there ways to make it easier? Or at least shorter?"
  - Object-oriented programming
  - Functional programming and recursion
- "Movie-manipulating programs take a long time to execute. Why? How fast/slow can programs be?"
  - Algorithmic complexity
Does this motivate and engage students?

- Homework assignments suggest that it does.
  - Shared on-line in a collaborative web space (CoWeb)
- Some students reported writing programs outside of class for fun.
[Student collage examples: "Soup" and "Stephen Hawking"]

Latest: Spring 2004
- "Well, I looked at last years' collages, and I certainly can't be beat."
Relevance through Data-first Computing

- Real users come to a computer with data that they care about, then they (unwillingly) learn the computer in order to manipulate their data as they need.
- "Media Computation" works like that.
  - We use pictures of students in class demonstrations.
  - Students do use their own pictures as starting points for manipulations.
    - They started doing this in the second week.
  - Some students reversed sounds looking for hidden messages.
- How often do students use their second week of CS1 on their own data?
- How does that change the students' relationship to the material?

Does the class work to address failure/drop rates?

- In Spring 2003, 121 students (2/3 female), 3 drops
- Fall 2003, 303 students, 8 drops
- Spring 2004, 395 students, 19 drops

Success Rate:
- Average GT CS1 (2000-2002): 72.2%
- Media Computation, Spring 2003: 88.5%
- Media Computation, Fall 2003: 87.5%

- 60% of students surveyed at the end of the Spring 2003 course say that they want a second course.
  - These are non-majors, who have already fulfilled their requirement.
- We are getting transfers into the CS major.
Were Students Motivated and Engaged?

Q. What do you like best about this course?

Survey responses (Sp03) suggest that students responded well to the context of media manipulation and creation.

Course      | Don't like it/Nothing | Enjoy Content | Content is Useful
Trad. CS1   | 18.2%                 | 12.1%         | 0.0%
Engineering | 12.9%                 | 16.1%         | 25.8%
Media Comp  | 0.0%                  | 21.3%         | 12.4%
How did Women Respond to the Course?

Did we make it:
- Relevant?
  - "I dreaded CS, but ALL of the topics thus far have been applicable to my future career (& personal) plans—there isn't anything I don't like about this class!!!"
- Creative?
  - "I just wish I had more time to play around with that and make neat effects. But JES will be on my computer forever, so… that's the nice thing about this class is that you could go as deep into the homework as you wanted. So, I'd turn it in and then me and my roommate would do more after to see what we could do with it."
- Social?
  - 20% of Spring 2003 students said "Collaboration" was the best part of CS1315
  - "Actually, I think [collaboration] is one of the best things about this class. My roommate and I abided by all the rules... but we took full advantage of the collaboration. It was more just the ideas bouncing off each other. I don't think this class would have been as much fun if I wasn't able to collaborate."
  - On CoWeb use: "Yes, it's not just about the class… people talk about anything, it's a little bit more friendly than just here's your assignment."

Next steps… A second course and an alternative path

- CS1316 Representing Structure and Behavior, to be offered in Spring 2005
  - Essentially, data structures in a media context
  - Leads into how professional animators and computer musicians do their programming
  - The two courses (CS1315 and CS1316) will be sufficient to take the rest of our traditional CS courses
- Defining a CS minor now
- Creating a BS in Computational Media
  - Joint with the School of Literature, Communication, and Culture
Next steps… Moving beyond GT

- Versions of Media Computation appearing at other institutions
  - Gainesville College (2-year, in Georgia) has offered the course twice
  - DePauw, Brandeis (in Scheme), Georgia Perimeter College (in Java), U. California Santa Cruz, and U. Maryland at College Park (in Java) are teaching their own versions using our materials.
Next steps… Moving beyond Undergrad

- Teaching Georgia's HS teachers how to teach programming, starting Summer 2004
  - Using a MediaComp approach
    - After all, teachers are typically non-CS majors…
  - Helping the State certify CS teachers (for the No Child Left Behind Act), leading to more CS Advanced Placement teachers
- Developing two workshops
  - From no-programming to teaching-programming in 2 weeks
  - From teaching-programming to teaching-Java-AP in 1 week
  - 75 teachers this summer (45 from-scratch, 30 to-AP)
Image Processing

- We're going to use Python as a kind of pseudocode.
- Goals:
  - Give you a basic understanding of image processing, including the psychophysics of sight
  - Identify some interesting examples to use
    - Especially those that have the potential for going deep into CS content
We perceive light differently from how it actually is

- Color is continuous
  - Visible light is in the wavelengths between 370 and 730 nanometers
    - That's 0.00000037 and 0.00000073 meters
- But we perceive light with color sensors that peak around 425 nm (blue), 550 nm (green), and 560 nm (red).
  - Our brain figures out which color is which by figuring out how much of each kind of sensor is responding
  - One implication: We perceive two kinds of "orange": one that's spectral, and one that's red+yellow (hits our color sensors just right)
- Dogs and other simpler animals have only two kinds of sensors
  - They do see color. Just less color.
Luminance vs. Color

- We perceive borders of things, motion, and depth via luminance
  - Luminance is not the amount of light, but our perception of the amount of light.
  - We see blue as "darker" than red, even with the same amount of light.
- Much of our luminance perception is based on comparison to backgrounds, not raw values.
- Luminance perception is actually color-blind: it's handled by a completely different part of the brain.
Digitizing pictures as bunches of little dots

- We digitize pictures into lots of little dots
- Enough dots and it looks like a continuous whole to our eye
  - Our eye has limited resolution
  - Our background/depth acuity is particularly low
- Each picture element is referred to as a pixel
Pixels are picture elements

- Each pixel object knows its color
- It also knows where it is in its picture
Encoding color

- Each pixel encodes color at that position in the picture
- There are lots of encodings for color
  - Printers use CMYK: Cyan, Magenta, Yellow, and blacK.
  - Others use HSB, for Hue, Saturation, and Brightness (also called HSV, for Hue, Saturation, and Value)
- We'll use the most common encoding for computers
  - RGB: Red, Green, Blue
RGB

- In RGB, each color has three component colors:
  - Amount of redness
  - Amount of greenness
  - Amount of blueness
- Each does appear as a separate dot on most devices, but our eye blends them.
- In most computer-based models of RGB, a single byte (8 bits) is used for each component
  - So a complete RGB color is 24 bits, 8 bits for each
Encoding RGB

- Each component color (red, green, and blue) is encoded as a single byte
- Colors go from (0,0,0) to (255,255,255)
- If all three components are the same, the color is in grayscale
  - (50,50,50) (at position (2,2) in the example) is gray
  - (0,0,0) (at position (1,2) in the example) is black
  - (255,255,255) is white
Is that enough?

- We're representing color in 24 (3 × 8) bits.
  - That's 16,777,216 (2^24) possible colors
  - We can see more colors than that
- But the real limitation is the physical devices:
  - We can get 16 million colors out of a monitor
  - But that doesn't cover all of the colors we can see
- Some graphics systems support 32 bits per pixel
  - That may be more bits for color, or an additional 8 bits to represent 256 levels of translucence
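The arithmetic above is easy to check in plain Python (not JES). The pack/unpack helpers below are our own illustration of how a 32-bit pixel might lay out 8-bit alpha, red, green, and blue channels; the alpha-first ordering is just one common convention, not the only one.

```python
def pack_argb(alpha, red, green, blue):
    # Pack four 8-bit channels into a single 32-bit integer
    return (alpha << 24) | (red << 16) | (green << 8) | blue

def unpack_argb(pixel):
    # Recover the four 8-bit channels from a 32-bit pixel
    return ((pixel >> 24) & 255, (pixel >> 16) & 255,
            (pixel >> 8) & 255, pixel & 255)

print(2 ** 24)  # 16777216 possible 24-bit colors, as claimed above
print(unpack_argb(pack_argb(128, 10, 20, 30)))  # (128, 10, 20, 30)
```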
Basic Picture Functions

- makePicture(filename) creates and returns a picture object, from the JPEG file at the filename
- show(picture) displays a picture in a window
- We'll learn functions for manipulating pictures later, like getColor, setColor, and repaint
Writing a recipe: Making our own functions

- To make a function, use the command def
- Then, the name of the function, and the names of the input values between parentheses ("(input1)")
- End the line with a colon (":")
- The body of the recipe is indented (Hint: Use two spaces)
- Your function does NOT exist for JES until you load it
Our first picture recipe (Use a loop!)

def decreaseRed(picture):
  for p in getPixels(picture):
    value = getRed(p)
    setRed(p, value * 0.5)

Used like this:

>>> file = "/Users/guzdial/mediasources/barbara.jpg"
>>> picture = makePicture(file)
>>> show(picture)
>>> decreaseRed(picture)
>>> repaint(picture)
def clearRed(picture):
  for pixel in getPixels(picture):
    setRed(pixel, 0)

def greyscale(picture):
  for p in getPixels(picture):
    redness = getRed(p)
    greenness = getGreen(p)
    blueness = getBlue(p)
    luminance = (redness + blueness + greenness) / 3
    setColor(p, makeColor(luminance, luminance, luminance))

def negative(picture):
  for px in getPixels(picture):
    red = getRed(px)
    green = getGreen(px)
    blue = getBlue(px)
    negColor = makeColor(255 - red, 255 - green, 255 - blue)
    setColor(px, negColor)
Increasing Red

def increaseRed(picture):
  for p in getPixels(picture):
    value = getRed(p)
    setRed(p, value * 1.2)

What happened here?!?
- Remember that the limit for redness is 255.
- If you go beyond 255, all kinds of weird things can happen: wrap-around.
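One simple way to avoid the wrap-around is to cap the new value before storing it. This is a plain-Python sketch; the clamp() helper is our own addition, not a JES function.

```python
def clamp(value):
    # Keep a color component inside the legal 0-255 range
    return max(0, min(255, int(value)))

print(clamp(210 * 1.2))  # 252: still in range, unchanged
print(clamp(240 * 1.2))  # 288 would overflow, so we get 255
```

In increaseRed above, writing setRed(p, clamp(value * 1.2)) would pin very bright reds at 255 instead of wrapping.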
Clearing Blue

def clearBlue(picture):
  for p in getPixels(picture):
    setBlue(p, 0)
Combining into a sunset function

- How do we turn this beach scene into a sunset?
- What happens at sunset?
  - At first, I tried increasing the red, but that made things like red specks in the sand REALLY prominent.
    - That can't be how it really works.
  - New theory: As the sun sets, less blue and green is visible, which makes things look more red.
A Sunset-generation Function

def makeSunset(picture):
  for p in getPixels(picture):
    value = getBlue(p)
    setBlue(p, value * 0.7)
    value = getGreen(p)
    setGreen(p, value * 0.7)
Creating a negative

- Let's think it through
  - R, G, and B go from 0 to 255
  - Let's say Red is 10. That's very light red.
    - What's the opposite? LOTS of red!
    - The negative of that would be 245: 255 - 10
- So, for each pixel, if we negate each color component in creating a new color, we negate the whole picture.
Recipe for creating a negative

def negative(picture):
  for px in getPixels(picture):
    red = getRed(px)
    green = getGreen(px)
    blue = getBlue(px)
    negColor = makeColor(255 - red, 255 - green, 255 - blue)
    setColor(px, negColor)

Original, negative, and negative-of-the-negative.
- Introducing an information-preserving transformation (as opposed to grayscale, which loses information)
Converting to greyscale

- We know that if red=green=blue, we get grey
  - But what value do we set all three to?
- What we need is a value representing the darkness of the color, the luminance
- There are lots of ways of getting it, but one way that works reasonably well is dirt simple: just take the average.
Converting to grayscale

def grayScale(picture):
  for p in getPixels(picture):
    intensity = (getRed(p) + getGreen(p) + getBlue(p)) / 3
    setColor(p, makeColor(intensity, intensity, intensity))
But that's not really the best grayscale

- In reality, we don't perceive red, green, and blue as equal in their amount of luminance: how bright (or non-bright) something is.
  - We tend to see blue as "darker" and red as "brighter"
  - Even if, physically, the same amount of light is coming off of each
- Photoshop's grayscale is very nice: very similar to the way our eye sees it
  - B&W TVs are also pretty good
Building a better grayscale

- We'll weight red, green, and blue based on how light we perceive them to be, based on laboratory experiments.
- (Starting to talk about more than one way to do the same/similar thing.)

def greyScaleNew(picture):
  for px in getPixels(picture):
    newRed = getRed(px) * 0.299
    newGreen = getGreen(px) * 0.587
    newBlue = getBlue(px) * 0.114
    luminance = newRed + newGreen + newBlue
    setColor(px, makeColor(luminance, luminance, luminance))
Replacing colors using IF

- We don't have to do one-to-one changes or replacements of color
- We can use if to decide whether we want to make a change.
  - We could look for a range of colors, or one specific color.
  - We could use an operation (like multiplication) to set the new color, or we can set it to a specific value.
  - It all depends on the effect that we want.
Posterizing: Reducing the range of colors

Posterizing: How we do it
- We look for a range of colors, then map them to a single color.
  - If red is between 63 and 128, set it to 95
  - If green is less than 64, set it to 31
  - ...
- It requires a lot of if statements, but it's really pretty simple.
- The end result is that a bunch of different colors get set to a few colors.
Posterizing function

def posterize(picture):
  # loop through the pixels
  for p in getPixels(picture):
    # get the RGB values
    red = getRed(p)
    green = getGreen(p)
    blue = getBlue(p)
    # check and set red values
    if (red < 64):
      setRed(p, 31)
    if (red > 63 and red < 128):
      setRed(p, 95)
    if (red > 127 and red < 192):
      setRed(p, 159)
    if (red > 191 and red < 256):
      setRed(p, 223)
    # check and set green values
    if (green < 64):
      setGreen(p, 31)
    if (green > 63 and green < 128):
      setGreen(p, 95)
    if (green > 127 and green < 192):
      setGreen(p, 159)
    if (green > 191 and green < 256):
      setGreen(p, 223)
    # check and set blue values
    if (blue < 64):
      setBlue(p, 31)
    if (blue > 63 and blue < 128):
      setBlue(p, 95)
    if (blue > 127 and blue < 192):
      setBlue(p, 159)
    if (blue > 191 and blue < 256):
      setBlue(p, 223)
Generating sepia-toned prints

- Pictures that are sepia-toned have a yellowish tint that we associate with older pictures.
- It's not directly a matter of simply increasing the yellow in the picture, because it's not a one-to-one correspondence.
- Instead, colors in different ranges get mapped to other colors.
- We can create such a mapping using IF
Example of sepia-toned prints

Here's how we do it:

def sepiaTint(picture):
  # Convert image to greyscale
  greyScaleNew(picture)
  # loop through picture to tint pixels
  for p in getPixels(picture):
    red = getRed(p)
    blue = getBlue(p)
    # tint shadows
    if (red < 63):
      red = red * 1.1
      blue = blue * 0.9
    # tint midtones
    if (red > 62 and red < 192):
      red = red * 1.15
      blue = blue * 0.85
    # tint highlights
    if (red > 191):
      red = red * 1.08
      if (red > 255):
        red = 255
      blue = blue * 0.93
    # set the new color values
    setBlue(p, blue)
    setRed(p, red)
Let's try making Barbara a redhead!

- We could just try increasing the redness, but as we've seen, that has problems.
  - Overriding some red spots
  - And that's more than just her hair
- If only we could increase the redness only of the brown areas of Barb's head…
Making Barb a redhead

def turnRed():
  brown = makeColor(57, 16, 8)
  file = r"C:\Documents and Settings\Mark Guzdial\My Documents\mediasources\barbara.jpg"
  picture = makePicture(file)
  for px in getPixels(picture):
    color = getColor(px)
    if distance(color, brown) < 50.0:
      redness = getRed(px) * 1.5
      setRed(px, redness)
  show(picture)
  return(picture)

(distance() is Cartesian-coordinate distance.)
Tuning our color replacement

- If you want to get more of Barb's hair, just increasing the threshold doesn't work
  - The wood behind her comes within the threshold value
- How could we do it better?
  - Lower our threshold, but then miss some of the hair
  - Work only within a range…
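The distance() used in turnRed is ordinary Euclidean distance in RGB space. A minimal plain-Python sketch of the same idea (our own helper working on (r, g, b) tuples, not the JES built-in):

```python
import math

def color_distance(c1, c2):
    # Euclidean distance between two (r, g, b) colors
    return math.sqrt((c1[0] - c2[0]) ** 2 +
                     (c1[1] - c2[1]) ** 2 +
                     (c1[2] - c2[2]) ** 2)

brown = (57, 16, 8)
print(color_distance(brown, brown))                # 0.0: identical colors
print(color_distance((60, 20, 10), brown) < 50.0)  # True: inside threshold
```

Raising the 50.0 threshold widens the sphere of colors treated as "brown," which is exactly why the wood behind her starts to match.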
Introducing the function range

- range returns a sequence between its first two inputs, possibly using a third input as the increment

>>> print range(1,4)
[1, 2, 3]
>>> print range(-1,3)
[-1, 0, 1, 2]
>>> print range(1,10,2)
[1, 3, 5, 7, 9]
Replacing colors in a range

(Get the x and y ranges using MediaTools.)

def turnRedInRange():
  brown = makeColor(57, 16, 8)
  file = r"C:\Documents and Settings\Mark Guzdial\My Documents\mediasources\barbara.jpg"
  picture = makePicture(file)
  for x in range(70, 168):
    for y in range(56, 190):
      px = getPixel(picture, x, y)
      color = getColor(px)
      if distance(color, brown) < 50.0:
        redness = getRed(px) * 1.5
        setRed(px, redness)
  show(picture)
  return(picture)
Mirroring

- Imagine a mirror placed horizontally across the picture, or vertically
- What would we see?
- How do we generate that digitally?
  - We simply copy the colors of pixels from one place to another
Mirroring a picture

- Slicing a picture down the middle and sticking a mirror on the slice
- Do it by using a loop to measure a difference
  - The index variable is actually measuring distance from the mirror point
  - Then reference either side of the mirror point using the difference
Recipe for mirroring

def mirrorVertical(source):
  mirrorpoint = int(getWidth(source) / 2)
  for y in range(1, getHeight(source)):
    for x in range(1, mirrorpoint):
      p = getPixel(source, x + mirrorpoint, y)
      p2 = getPixel(source, mirrorpoint - x, y)
      c = getColor(p2)
      setColor(p, c)
Can we do it with a horizontal mirror?

def mirrorHorizontal(source):
  mirrorpoint = int(getHeight(source) / 2)
  for y in range(1, mirrorpoint):
    for x in range(1, getWidth(source)):
      p = getPixel(source, x, y + mirrorpoint)
      p2 = getPixel(source, x, mirrorpoint - y)
      setColor(p, getColor(p2))

Of course!
What if we wanted to copy bottom to top?

- Very simple: swap the p and p2 in the bottom line
  - Copy from p to p2, instead of from p2 to p

def mirrorHorizontal(source):
  mirrorpoint = int(getHeight(source) / 2)
  for y in range(1, mirrorpoint):
    for x in range(1, getWidth(source)):
      p = getPixel(source, x, y + mirrorpoint)
      p2 = getPixel(source, x, mirrorpoint - y)
      setColor(p2, getColor(p))
Messing with Santa some more
Doing something useful with mirroring

- Mirroring can be used to create interesting effects, but it can also be used to create realistic effects.
- Consider this image from a trip to Athens, Greece.
  - Can we "repair" the temple by mirroring the complete part onto the broken part?
Figuring out where to mirror

- Use MediaTools to find the mirror point and the range that we want to copy
Program to mirror the temple

(setMediaPath() and getMediaPath(baseName) allow us to set a media folder.)

def mirrorTemple():
  source = makePicture(getMediaPath("temple.jpg"))
  mirrorpoint = 277
  lengthToCopy = mirrorpoint - 14
  for x in range(1, lengthToCopy):
    for y in range(28, 98):
      p = getPixel(source, mirrorpoint - x, y)
      p2 = getPixel(source, mirrorpoint + x, y)
      setColor(p2, getColor(p))
  show(source)
  return source
Did it really work?

- It clearly did the mirroring, but that doesn't create a 100% realistic image.
- Check out the shadows: which direction is the sun coming from?
Copying pixels

- In general, what we want to do is keep track of a sourceX and sourceY, and a targetX and targetY.
- We increment them (add to them) in pairs
  - sourceX and targetX get incremented together
  - sourceY and targetY get incremented together
- The tricky parts are:
  - Setting values inside the bodies of the loops
  - Incrementing at the bottom of the loops
Copying Barb to a canvas

def copyBarb():
  # Set up the source and target pictures
  barbf = getMediaPath("barbara.jpg")
  barb = makePicture(barbf)
  canvasf = getMediaPath("7inX95in.jpg")
  canvas = makePicture(canvasf)
  # Now, do the actual copying
  targetX = 1
  for sourceX in range(1, getWidth(barb)):
    targetY = 1
    for sourceY in range(1, getHeight(barb)):
      color = getColor(getPixel(barb, sourceX, sourceY))
      setColor(getPixel(canvas, targetX, targetY), color)
      targetY = targetY + 1
    targetX = targetX + 1
  show(barb)
  show(canvas)
  return canvas
Transformation = Small changes in copying

- Making relatively small changes in this basic copying program can produce a variety of transformations.
  - Change the targetX and targetY, and you copy wherever you want
  - Cropping: Change the sourceX and sourceY range, and you copy only part of the picture.
  - Rotating: Swap targetX and targetY, and you end up copying sideways
  - Scaling: Change the increment on sourceX and sourceY, and you either grow or shrink the image.
Copying into the middle of the canvas

def copyBarbMidway():
  # Set up the source and target pictures
  barbf = getMediaPath("barbara.jpg")
  barb = makePicture(barbf)
  canvasf = getMediaPath("7inX95in.jpg")
  canvas = makePicture(canvasf)
  # Now, do the actual copying
  targetX = 100
  for sourceX in range(1, getWidth(barb)):
    targetY = 100
    for sourceY in range(1, getHeight(barb)):
      color = getColor(getPixel(barb, sourceX, sourceY))
      setColor(getPixel(canvas, targetX, targetY), color)
      targetY = targetY + 1
    targetX = targetX + 1
  show(barb)
  show(canvas)
  return canvas
Rotating the copy

def copyBarbSideways():
  # Set up the source and target pictures
  barbf = getMediaPath("barbara.jpg")
  barb = makePicture(barbf)
  canvasf = getMediaPath("7inX95in.jpg")
  canvas = makePicture(canvasf)
  # Now, do the actual copying
  targetX = 1
  for sourceX in range(1, getWidth(barb)):
    targetY = 1
    for sourceY in range(1, getHeight(barb)):
      color = getColor(getPixel(barb, sourceX, sourceY))
      setColor(getPixel(canvas, targetY, targetX), color)
      targetY = targetY + 1
    targetX = targetX + 1
  show(barb)
  show(canvas)
  return canvas
Cropping: Just the face

def copyBarbsFace():
  # Set up the source and target pictures
  barbf = getMediaPath("barbara.jpg")
  barb = makePicture(barbf)
  canvasf = getMediaPath("7inX95in.jpg")
  canvas = makePicture(canvasf)
  # Now, do the actual copying
  targetX = 100
  for sourceX in range(45, 200):
    targetY = 100
    for sourceY in range(25, 200):
      color = getColor(getPixel(barb, sourceX, sourceY))
      setColor(getPixel(canvas, targetX, targetY), color)
      targetY = targetY + 1
    targetX = targetX + 1
  show(barb)
  show(canvas)
  return canvas
Scaling the picture down

def copyBarbsFaceSmaller():
  # Set up the source and target pictures
  barbf = getMediaPath("barbara.jpg")
  barb = makePicture(barbf)
  canvasf = getMediaPath("7inX95in.jpg")
  canvas = makePicture(canvasf)
  # Now, do the actual copying
  sourceX = 45
  for targetX in range(100, 100 + ((200 - 45) / 2)):
    sourceY = 25
    for targetY in range(100, 100 + ((200 - 25) / 2)):
      color = getColor(getPixel(barb, sourceX, sourceY))
      setColor(getPixel(canvas, targetX, targetY), color)
      sourceY = sourceY + 2
    sourceX = sourceX + 2
  show(barb)
  show(canvas)
  return canvas
Scaling the picture up

def copyBarbsFaceLarger():
  # Set up the source and target pictures
  barbf = getMediaPath("barbara.jpg")
  barb = makePicture(barbf)
  canvasf = getMediaPath("7inX95in.jpg")
  canvas = makePicture(canvasf)
  # Now, do the actual copying
  sourceX = 45
  for targetX in range(100, 100 + ((200 - 45) * 2)):
    sourceY = 25
    for targetY in range(100, 100 + ((200 - 25) * 2)):
      color = getColor(getPixel(barb, int(sourceX), int(sourceY)))
      setColor(getPixel(canvas, targetX, targetY), color)
      sourceY = sourceY + 0.5
    sourceX = sourceX + 0.5
  show(barb)
  show(canvas)
  return canvas
What to do about scaling?

- How do we clear up the degradation of scaling up?
- There are a variety of techniques, but most follow the same basic idea:
  - Use the pixels around a new pixel to figure out what color it should be, then somehow (e.g., by averaging) compute the right color.
  - Different techniques look at different pixels and compute different averages in different ways.
A blurring recipe

(This isn't very efficient, nor very effective. There are lots of different ways of implementing blurs of various kinds.)

def blur(pic, size):
  for pixel in getPixels(pic):
    currentX = getX(pixel)
    currentY = getY(pixel)
    r = 0
    g = 0
    b = 0
    count = 0
    for x in range(currentX - size, currentX + size):
      for y in range(currentY - size, currentY + size):
        if (x < 0) or (y < 0) or (x >= getWidth(pic)) or (y >= getHeight(pic)):
          pass  # Skip if we go off the edge
        else:
          r = r + getRed(getPixel(pic, x, y))
          g = g + getGreen(getPixel(pic, x, y))
          b = b + getBlue(getPixel(pic, x, y))
          count = count + 1
    newColor = makeColor(r / count, g / count, b / count)
    setColor(pixel, newColor)
Blurring out the pixelation
Background subtraction

- Let's say that you have a picture of someone, and a picture of the same place (same background) without that someone there. Could you subtract out the background and leave just the picture of the person?
- Maybe even change the background?
- Let's take that as our problem!
Person (Katie) and Background
Background Subtraction Code

def swapbg(person, bg, newbg):
  for x in range(1, getWidth(person)):
    for y in range(1, getHeight(person)):
      personPixel = getPixel(person, x, y)
      bgpx = getPixel(bg, x, y)
      personColor = getColor(personPixel)
      bgColor = getColor(bgpx)
      if distance(personColor, bgColor) < 10:
        newColor = getColor(getPixel(newbg, x, y))
        setColor(personPixel, newColor)
Putting Katie in a Jungle

But why isn't it a lot better?

- We've got places where pixels got swapped that we didn't want swapped
  - See Katie's shirt stripes
- We've got places where we want pixels swapped, but didn't get them swapped
  - See where Katie made a shadow
Another way: Chromakey

- Have a background of a known color
  - Some color that won't be on the person you want to mask out
  - Pure green or pure blue is most often used
  - I used my son's blue bedsheet
- This is how the weather people seem to be in front of a map; they're actually in front of a blue sheet.
Chromakey recipe

def chromakey(source, bg):
  # source should have something in front of blue; bg is the new background
  for x in range(1, getWidth(source)):
    for y in range(1, getHeight(source)):
      p = getPixel(source, x, y)
      # My definition of blue: if the redness + greenness < blueness
      if (getRed(p) + getGreen(p) < getBlue(p)):
        # Then, grab the color at the same spot from the new background
        setColor(p, getColor(getPixel(bg, x, y)))
Can also do this with getPixels()

def chromakey2(source, bg):
  # source should have something in front of blue; bg is the new background
  for p in getPixels(source):
    # My definition of blue: if the redness + greenness < blueness
    if (getRed(p) + getGreen(p) < getBlue(p)):
      # Then, grab the color at the same spot from the new background
      setColor(p, getColor(getPixel(bg, getX(p), getY(p))))
Example results
Sound Processing

- Again, we're going to use Python as a kind of pseudocode.
- Goals:
  - Give you a basic understanding of audio processing, including psychoacoustics
  - Identify some interesting examples to use
    - Especially those that have the potential for going deep into CS content
Basic Sound Functions

- makeSound(filename) creates and returns a sound object, from the WAV file at the filename
- play(sound) makes the sound play (but doesn't wait until it's done)
- blockingPlay(sound) waits for the sound to finish
- We'll learn more functions later, like getSample and setSample
How sound works: Acoustics, the physics of sound

- Sounds are waves of air pressure
  - Sound comes in cycles
  - The frequency of a wave is the number of cycles per second (cps), or Hertz
    - (Complex sounds have more than one frequency in them.)
  - The amplitude is the maximum height of the wave
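The description above maps directly onto numbers: a pure tone of a given frequency and amplitude is just a sampled sine wave. A plain-Python sketch (the function name and parameters here are our own, not JES's):

```python
import math

def make_sine_samples(freq, amplitude, sampling_rate, num_samples):
    # Sample a sine wave that completes `freq` cycles per second
    return [amplitude * math.sin(2 * math.pi * freq * i / sampling_rate)
            for i in range(num_samples)]

# One cycle of a 440 Hz tone at CD quality spans about 100 samples
samples = make_sine_samples(440, 1000, 44100, 200)
print(samples[0])            # 0.0: the wave starts at rest pressure
print(max(samples) <= 1000)  # True: the amplitude bounds the wave height
```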
Volume and pitch: Psychoacoustics, the psychology of sound

- Our perception of volume is related (logarithmically) to changes in amplitude
  - If the amplitude doubles, it's about a 3 decibel (dB) change
- Our perception of pitch is related (logarithmically) to changes in frequency
  - Higher frequencies are perceived as higher pitches
  - We can hear between 5 Hz and 20,000 Hz (20 kHz)
  - A above middle C is 440 Hz
“Logarithmically?”
It's strange, but our hearing works on ratios, not differences, e.g., for pitch:
  We hear the difference between 200 Hz and 400 Hz as the same as between 500 Hz and 1000 Hz.
  Similarly, 200 Hz to 600 Hz sounds like 1000 Hz to 3000 Hz.
Intensity (volume) is measured in watts per square meter (W/m^2):
  A change from 0.1 W/m^2 to 0.01 W/m^2 sounds the same to us as 0.001 W/m^2 to 0.0001 W/m^2.
Decibel is a logarithmic measure

A decibel is a ratio between two intensities: 10 * log10(I1/I2).
As an absolute measure, it's in comparison to the threshold of audibility:
  0 dB can't be heard.
  Normal speech is about 60 dB.
  A shout is about 80 dB.
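This ratio formula is easy to check in plain Python (standard math module only; these are not JES sound functions):

```python
from math import log10

def decibels(i1, i2):
    # A decibel compares two intensities: 10 * log10(I1/I2)
    return 10 * log10(float(i1) / i2)

# Doubling the intensity is about a 3 dB change
double_db = decibels(2.0, 1.0)
# The two W/m^2 drops mentioned above come out identical
drop1 = decibels(0.1, 0.01)
drop2 = decibels(0.001, 0.0001)
```

Both drops are exactly 10 dB, which is why they sound like the same change to us.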
Digitizing Sound: How do we get that into numbers?

Remember in calculus, estimating the curve by creating rectangles?
We can do the same to estimate the sound curve.
Analog-to-digital conversion (ADC) will give us the amplitude at an instant as a number: a sample.
How many samples do we need?

Nyquist Theorem
We need twice as many samples as the maximum frequency in order to represent (and recreate, later) the original sound.
The number of samples recorded per second is the sampling rate.
  If we capture 8000 samples per second, the highest frequency we can capture is 4000 Hz. That's how phones work.
  If we capture more than 44,000 samples per second, we capture everything that we can hear (max 22,000 Hz). CD quality is 44,100 samples per second.
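The Nyquist limit can be demonstrated in a few lines of plain Python (no JES needed): at 8000 samples per second, a cosine tone above the 4000 Hz limit produces exactly the same samples as a lower tone; it aliases.

```python
from math import cos, pi

def sample_tone(freq, rate, n=16):
    # Take n samples of a cosine tone at the given sampling rate
    return [cos(2 * pi * freq * i / rate) for i in range(n)]

# 5000 Hz is above the Nyquist limit for an 8000 Hz sampling rate,
# so it is indistinguishable from the 3000 Hz tone (8000 - 5000)
low = sample_tone(3000, 8000)
high = sample_tone(5000, 8000)
```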
Digitizing sound in the computer
Each sample is stored as a number (two bytes).
What's the range of available combinations?
  16 bits: 2^16 = 65,536.
  But we want both positive and negative values, to indicate compressions and rarefactions.
  What if we use one bit to indicate positive (0) or negative (1)?
  That leaves us with 15 bits: 2^15 = 32,768.
  One of those combinations will stand for zero. We'll use a "positive" one, so that's one less pattern for positives.
Each sample can be between -32,768 and 32,767.
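Python's struct module (standard library, not part of JES) makes that range concrete: '<h' is the packed little-endian signed 16-bit format that WAV samples typically use.

```python
import struct

def to_sample_bytes(value):
    # Pack a sample value into the two bytes of a little-endian signed 16-bit int
    return struct.pack('<h', value)

largest = to_sample_bytes(32767)    # fits in two bytes
smallest = to_sample_bytes(-32768)  # also fits
# to_sample_bytes(32768) raises struct.error: it doesn't fit in 16 bits
```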
Working with sounds
We'll use pickAFile and makeSound as we have before, but now we want .wav files.
We'll use getSamples to get all the sample objects out of a sound.
We can also get the value at any index with getSampleValueAt.
Sounds also know their length (getLength) and their sampling rate (getSamplingRate).
We can save sounds with writeSoundTo(sound,"file.wav").
Recipe to Increase the Volume
def increaseVolume(sound):
  for sample in getSamples(sound):
    value = getSample(sample)
    setSample(sample,value * 2)

Using it:

>>> f="/Users/guzdial/mediasources/gettysburg10.wav"
>>> s=makeSound(f)
>>> increaseVolume(s)
>>> play(s)
>>> writeSoundTo(s,"/Users/guzdial/mediasources/louder-g10.wav")
Decreasing the volume
def decreaseVolume(sound):
  for sample in getSamples(sound):
    value = getSample(sample)
    setSample(sample,value * 0.5)

This works just like increaseVolume, but we're lowering each sample by 50% instead of doubling it.
Maximizing volume
How do we get maximal volume?
It's a three-step process:
  First, figure out the loudest sound (largest sample).
  Next, figure out the multiplier needed to make that sample fill the available space: we want to solve for x where x * loudest = 32767, so x = 32767/loudest.
  Finally, multiply every sample by the multiplier.
Maxing (normalizing) the sound
def normalize(sound):
  largest = 0
  for s in getSamples(sound):
    largest = max(largest,getSample(s))
  multiplier = 32767.0 / largest
  print "Largest sample value in original sound was", largest
  print "Multiplier is", multiplier
  for s in getSamples(sound):
    louder = multiplier * getSample(s)
    setSample(s,louder)
Could also do this with IF
def normalize(sound):
  largest = 0
  for s in getSamples(sound):
    if getSample(s) > largest:
      largest = getSample(s)
  multiplier = 32767.0 / largest
  print "Largest sample value in original sound was", largest
  print "Multiplier is", multiplier
  for s in getSamples(sound):
    louder = multiplier * getSample(s)
    setSample(s,louder)
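The same algorithm can be tried outside JES on a plain list of sample values. One extra wrinkle (my addition, not in the slide code): using abs() when finding the largest value also protects against a negative peak that is bigger than any positive one.

```python
def normalize_samples(samples):
    # Find the loudest sample, compute the multiplier, scale everything
    largest = max(abs(s) for s in samples)
    multiplier = 32767.0 / largest
    return [int(round(multiplier * s)) for s in samples]

quiet = [3, 1, -2, 0]
loud = normalize_samples(quiet)  # the peak is pushed out to 32767
```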
Increasing volume by sample index

def increaseVolumeByRange(sound):
  for sampleIndex in range(1,getLength(sound)+1):
    value = getSampleValueAt(sound,sampleIndex)
    setSampleValueAt(sound,sampleIndex,value * 2)

This really is the same as:

def increaseVolume(sound):
  for sample in getSamples(sound):
    value = getSample(sample)
    setSample(sample,value * 2)
Recipe to play a sound backwards

def backwards(filename):
  source = makeSound(filename)
  target = makeSound(filename)
  sourceIndex = getLength(source)
  for targetIndex in range(1,getLength(target)+1):
    sourceValue = getSampleValueAt(source,sourceIndex)
    setSampleValueAt(target,targetIndex,sourceValue)
    sourceIndex = sourceIndex - 1
  return target

Note the use of return for returning the processed sound.
Recipe for halving the frequency of a sound

This is how a sampling synthesizer works!

def half(filename):
  source = makeSound(filename)
  target = makeSound(filename)
  sourceIndex = 1
  for targetIndex in range(1, getLength(target)+1):
    setSampleValueAt(target, targetIndex, getSampleValueAt(source, int(sourceIndex)))
    sourceIndex = sourceIndex + 0.5
  play(target)
  return target

Incrementing sourceIndex by only 0.5 (and truncating it with int) is the piece that does it: each source sample gets used twice.
Compare these two

def half(filename):
  source = makeSound(filename)
  target = makeSound(filename)
  sourceIndex = 1
  for targetIndex in range(1, getLength(target)+1):
    setSampleValueAt(target, targetIndex, getSampleValueAt(source, int(sourceIndex)))
    sourceIndex = sourceIndex + 0.5
  play(target)
  return target

Here's where we start to emphasize algorithms, apart from any particular implementation (even medium):

def copyBarbsFaceLarger():
  # Set up the source and target pictures
  barbf=getMediaPath("barbara.jpg")
  barb = makePicture(barbf)
  canvasf = getMediaPath("7inX95in.jpg")
  canvas = makePicture(canvasf)
  # Now, do the actual copying
  sourceX = 45
  for targetX in range(100,100+((200-45)*2)):
    sourceY = 25
    for targetY in range(100,100+((200-25)*2)):
      color = getColor(getPixel(barb,int(sourceX),int(sourceY)))
      setColor(getPixel(canvas,targetX,targetY), color)
      sourceY = sourceY + 0.5
    sourceX = sourceX + 0.5
  show(barb)
  show(canvas)
  return canvas
Both of them are sampling

Both of them have three parts:
  A start where objects are set up.
  A loop where samples or pixels are copied from one place to another. To decrease the frequency or increase the size, we take each sample/pixel twice; in both cases, we do that by incrementing the index by 0.5 and taking the integer part of the index.
  Finishing up and returning the result.
Recipe to double the frequency of a sound

Here's the critical piece: we skip every other sample in the source!

def double(filename):
  source = makeSound(filename)
  target = makeSound(filename)
  targetIndex = 1
  for sourceIndex in range(1, getLength(source)+1, 2):
    setSampleValueAt(target, targetIndex, getSampleValueAt(source, sourceIndex))
    targetIndex = targetIndex + 1
  # Clear out the rest of the target sound -- it's only half full!
  for secondHalf in range(getLength(target)/2, getLength(target)):
    setSampleValueAt(target,targetIndex,0)
    targetIndex = targetIndex + 1
  play(target)
  return target
Splicing Sounds
Splicing gets its name from literally cutting and pasting pieces of magnetic tape together.
Doing it digitally is easy, but not short:
  We find where the end points of words are.
  We copy the samples into the right places to make the words come out as we want them.
  (We can also change the volume of the words as we move them, to increase or decrease emphasis and make it sound more natural.)
Finding the word end-points
Using MediaTools and play-before/after-cursor, we can figure out the index numbers where each word ends.
Now, it’s all about copying
We have to keep track of the source and target indices:

targetIndex = where-the-incoming-sound-should-start
for sourceIndex in range(startingPoint,endingPoint):
  setSampleValueAt(target, targetIndex, getSampleValueAt(source, sourceIndex))
  targetIndex = targetIndex + 1
The Whole Splice
def splicePreamble():
  file = "/Users/guzdial/mediasources/preamble10.wav"
  source = makeSound(file)
  target = makeSound(file) # This will be the newly spliced sound
  targetIndex=17408 # targetIndex starts at just after "We the" in the new sound
  for sourceIndex in range(33414, 40052): # Where the word "United" is in the sound
    setSampleValueAt(target, targetIndex, getSampleValueAt(source, sourceIndex))
    targetIndex = targetIndex + 1
  for sourceIndex in range(17408, 26726): # Where the word "People" is in the sound
    setSampleValueAt(target, targetIndex, getSampleValueAt(source, sourceIndex))
    targetIndex = targetIndex + 1
  for index in range(1,1000): # Stick some quiet space after that
    setSampleValueAt(target, targetIndex,0)
    targetIndex = targetIndex + 1
  play(target) # Let's hear and return the result
  return target
What's going on here?

First, we set up a source and target.
Next, we copy "United" (samples 33414 to 40052) after "We the" (sample 17408).
  That means we end up at 17408+(40052-33414) = 17408+6638 = 24046.
  Where does "People" start?
Next, we copy "People" (samples 17408 to 26726) immediately afterward.
  Do we have to copy "of" too? Or is there a pause in there that we can make use of?
Finally, we insert a little silence: 1000 samples of 0's.
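The splice is just the copy-loop pattern applied twice. Here is the same logic sketched on plain Python lists (0-indexed, unlike JES's 1-indexed sounds), with made-up positions instead of the real word boundaries:

```python
def splice_in(samples, word_start, word_end, target_start):
    # Copy samples[word_start:word_end] so the copy begins at target_start,
    # exactly like the sourceIndex/targetIndex loops above
    out = list(samples)
    target_index = target_start
    for source_index in range(word_start, word_end):
        out[target_index] = samples[source_index]
        target_index += 1
    return out

# Move the chunk at positions 7-9 so it starts at position 2
spliced = splice_in([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 7, 10, 2)
```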
What if we didn't do that second copy? Or the pause?

def spliceSimpler():
  file = r"C:\Documents and Settings\Mark Guzdial\My Documents\mediasources\preamble10.wav"
  source = makeSound(file)
  target = makeSound(file) # This will be the newly spliced sound
  targetIndex=17408 # targetIndex starts at just after "We the" in the new sound
  for sourceIndex in range(33414, 40052): # Where the word "United" is in the sound
    setSampleValueAt(target, targetIndex, getSampleValueAt(source, sourceIndex))
    targetIndex = targetIndex + 1
  play(target) # Let's hear and return the result
  return target
Can we generalize shifting a sound into other frequencies?

def shift(filename,factor):
  source = makeSound(filename)
  target = makeSound(filename)
  sourceIndex = 1
  for targetIndex in range(1, getLength(target)+1):
    setSampleValueAt(target, targetIndex, getSampleValueAt(source, int(sourceIndex)))
    sourceIndex = sourceIndex + factor
    if sourceIndex > getLength(source):
      sourceIndex = 1
  play(target)
  return target
Now we have the basics of a sampling synthesizer

For a desired frequency f, we want a sampling interval (the factor in shift) that grows in proportion to f.
Useful exercise: build a shift function that takes a frequency as input.
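A sketch of that exercise on plain lists (assumed helper names, not JES functions): if we know the frequency f0 at which the source note was recorded, shifting to a desired frequency f is just shift() with factor f/f0.

```python
def resample(samples, factor):
    # Step through the source by 'factor', truncating the index,
    # wrapping around like shift() does
    result = []
    source_index = 0.0
    for _ in range(len(samples)):
        result.append(samples[int(source_index) % len(samples)])
        source_index += factor
    return result

def shift_to(samples, base_freq, desired_freq):
    # factor > 1 skips samples (higher pitch); factor < 1 repeats them (lower)
    return resample(samples, float(desired_freq) / base_freq)
```

For example, shift_to(note, 440, 880) plays the note an octave up, just like double().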
Making more complex sounds
We know that natural sounds are often the combination of multiple sounds.
Adding waves in physics or math is hard.
In computer science, it's easy! Simply add the samples at the same index in the two waves:

for sourceIndex in range(1,getLength(source)+1):
  targetValue=getSampleValueAt(target,sourceIndex)
  sourceValue=getSampleValueAt(source,sourceIndex)
  setSampleValueAt(source,sourceIndex,sourceValue+targetValue)
Uses for adding sounds
We can mix sounds.
  We even know how to change the volumes of the two sounds, even over time (e.g., fading in or fading out).
We can create echoes.
We can add sine (or other) waves together to create kinds of instruments/sounds that never existed in nature, but sound complex.
A function for adding two sounds
def addSounds(sound1,sound2):
  for index in range(1,getLength(sound1)+1):
    s1Sample = getSampleValueAt(sound1,index)
    s2Sample = getSampleValueAt(sound2,index)
    setSampleValueAt(sound2,index,s1Sample+s2Sample)

Notice that this adds sound1 into sound2.
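One caution the slide code skips: if both sounds are loud, the sum can overflow the 16-bit sample range. A plain-list version of addSounds that clamps the sum (the clamping is my addition):

```python
def add_sounds(s1, s2):
    # Add samples pairwise, clamping the result to the signed 16-bit range
    out = []
    for a, b in zip(s1, s2):
        total = a + b
        out.append(max(-32768, min(32767, total)))
    return out

mixed = add_sounds([100, 30000, -30000], [50, 20000, -20000])
```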
Making a chord by mixing three notes

>>> setMediaFolder()
New media folder: C:\Documents and Settings\Mark Guzdial\My Documents\mediasources\
>>> getMediaPath("bassoon-c4.wav")
'C:\\Documents and Settings\\Mark Guzdial\\My Documents\\mediasources\\bassoon-c4.wav'
>>> c4=makeSound(getMediaPath("bassoon-c4.wav"))
>>> e4=makeSound(getMediaPath("bassoon-e4.wav"))
>>> g4=makeSound(getMediaPath("bassoon-g4.wav"))
>>> addSounds(e4,c4)
>>> play(c4)
>>> addSounds(g4,c4)
>>> play(c4)
Adding sounds with a delay

Note that in this version we're adding into sound1!

def makeChord(sound1,sound2,sound3):
  for index in range(1,getLength(sound1)):
    s1Sample = getSampleValueAt(sound1,index)
    if index > 1000:
      s2Sample=getSampleValueAt(sound2,index-1000)
      setSampleValueAt(sound1,index,s1Sample+s2Sample)
    if index > 2000:
      s3Sample = getSampleValueAt(sound3,index-2000)
      setSampleValueAt(sound1,index,s1Sample + s2Sample + s3Sample)

Add in sound2 after 1000 samples; add in sound3 after 2000 samples.
How the original sound synthesizers worked

What if we added pure sine waves?
  We can generate a sound that is just a single tone (see the book).
  We can then add them together (perhaps manipulating their volume) to create sounds that don't exist in nature.
We don't have to use just sine waves.
  Waves that are square or triangular (seriously!) can be heard and have interesting dynamics.
  We can add together waves of lots of types to create unique sounds that can't be created by physical instruments.
We call this additive synthesis.
  Additive synthesis as-is isn't used much anymore.
Adding envelopes

Most real synthesizers today also allow you to manipulate envelopes.
An envelope is a definition of how quickly the aspects of the sound change over time.
  For example, the rise in volume (attack), how the volume is sustained over time (sustain), and how quickly the sound decays (decay): the ASD envelope.
Pianos tend to attack quickly, then decay quickly (without pedals).
Flutes tend to attack slowly and sustain as long as you want.
Wide world of music synthesis

What if we wrapped an envelope around every sine wave? That's closer to how real instruments work.
We can also use other techniques, such as FM synthesis and subtractive synthesis:
  FM synthesis controls (modulates) frequencies with other frequencies, creating richer sounds.
  Subtractive synthesis starts from noise and filters out undesired sounds.
What is MP3?
MP3 files are audio files encoded according to the MPEG standard (MPEG-1 Audio Layer 3), compressed in special ways.
  They use a model of how we hear to get rid of some of the sound: if there is a soft sound at the same time as a loud sound, don't record the soft sound.
  They use various compression techniques to make the file smaller.
WAV files are typically uncompressed, and don't use any smart models of hearing to make themselves smaller.
What is MIDI?
MIDI is a standard for encoding music, not sound.
  MIDI literally encodes "For this instrument (track), turn key #42 on" and then later "For this instrument (track), turn key #42 off."
  The quality of the actual sound depends entirely on the synthesizer: the quality of the instrument generation (whether recorded or synthesized).
MIDI files tend to be very, very small.
  Each MIDI instruction ("Play key #42 on track 7") is only about five bytes long, not thousands of bytes.
Playing MIDI in JES

The function playNote allows you to play MIDI piano with JES.
playNote takes three inputs:
  A note number.
    Not a frequency: it's literally the piano key number.
    C in the first octave is 1, C# is 2, C in the fourth octave is 60, D in the fourth octave is 62.
  A duration in milliseconds (1/1000 of a second).
  An intensity (0-127): literally, how hard the key is pressed.
Example
def song():
  playNote(60,200,127)
  playNote(62,500,127)
  playNote(64,800,127)
  playNote(60,600,127)
  for i in range(1,2):
    playNote(64,120,127)
    playNote(65,120,127)
    playNote(67,60,127)
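The note numbering above matches the MIDI convention in the middle octaves (C4 = 60, A4 = 69). Converting a note number to a frequency is a nice tie-in with the logarithmic pitch discussion earlier: each semitone multiplies the frequency by the twelfth root of 2. A sketch (not a JES function):

```python
def note_to_freq(note):
    # A above middle C (note 69) is 440 Hz; each semitone is a factor of 2**(1/12)
    return 440.0 * 2 ** ((note - 69) / 12.0)

a4 = note_to_freq(69)   # 440.0 Hz
c4 = note_to_freq(60)   # about 261.63 Hz, middle C
a5 = note_to_freq(81)   # 880.0 Hz, one octave up
```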
Video Processing
Video processing is just processing a whole bunch of JPEGs.
  Process each frame using os.listdir, which gets a list of the individual frames.
You can process existing movies by converting them into a JPEG sequence.
  Burst a movie into a series of JPEG frames: MediaTools or QuickTime Pro will do this for you.
Psychophysics of Movies:
Persistence of Vision
What makes movies work is yet another limitation of our visual system: persistence of vision.
We do not see every change that happens in the world around us.
Instead, our eye retains an image (i.e., tells the brain "This is the latest! Yup, this is still the latest!") for a brief period of time.
  If this were not the case, you would be aware of every time your eye blinks, because the world would "go away" for a moment.
16 frames and it's motion

If you see 16 separate pictures in one second, and these pictures are logically sequenced, you will perceive the pictures as being in motion.
  That is, #2 could logically follow from the scene in #1; 16 pictures of completely different things doesn't work.
16 frames per second (fps), 16 pictures in a second, is the lower bound for the sensation of motion.
Beyond 16 fps
Early silent pictures were 16 fps.
Motion picture standards shifted to 24 fps to make sound smoother.
Video cameras (digital video) capture 30 fps.
How high can we go?
  Air Force experiments suggest that pilots can recognize a flash of light in 1/200th of a second!
  Video game players say that they can discern a difference between 30 fps and 60 fps.
Bottom lines:
  Generate at least 16 fps and you provide a sense of motion.
  If you want to process video, you're going to have 30 fps to process (unless it's been modified elsewhere for you).

Simple Motion
def movingRectangle(directory):
  for frame in range(1,100): #99 frames
    canvas = makePicture(getMediaPath("640x480.jpg"))
    if frame < 50: # Less than 50, move down
      # Generate new positions each frame number
      addRectFilled(canvas,frame*10,frame*5,50,50,red)
    if frame >= 50: # 50 or more, move up
      addRectFilled(canvas,(50-(frame-50))*10,(50-(frame-50))*5,50,50,red)
    # Now, write out the frame
    # Have to deal with single digit vs. double digit frame numbers differently
    framenum=str(frame)
    if frame < 10:
      writePictureTo(canvas,directory+"//frame0"+framenum+".jpg")
    if frame >= 10:
      writePictureTo(canvas,directory+"//frame"+framenum+".jpg")
A Few Frames
frame01.jpg
frame02.jpg
frame50.jpg
Can we move more than one thing at once? Sure!

def movingRectangle2(directory):
  for frame in range(1,100): #99 frames
    canvas = makePicture(getMediaPath("640x480.jpg"))
    if frame < 50: # Less than 50, move down
      # Generate new positions each frame number
      addRectFilled(canvas,frame*10,frame*5,50,50,red)
    if frame >= 50: # 50 or more, move up
      addRectFilled(canvas,(50-(frame-50))*10,(50-(frame-50))*5,50,50,red)
    # Let's have one just moving around
    addRectFilled(canvas,100+int(10*sin(frame)),4*frame+int(10*cos(frame)),50,50,blue)
    # Now, write out the frame
    # Have to deal with single digit vs. double digit frame numbers differently
    framenum=str(frame)
    if frame < 10:
      writePictureTo(canvas,directory+"//frame0"+framenum+".jpg")
    if frame >= 10:
      writePictureTo(canvas,directory+"//frame"+framenum+".jpg")
addRectFilled(canvas,100+int(10*sin(frame)),4*frame+int(10*cos(frame)),50,50,blue)

What's going on here?
  Remember that both sine and cosine vary between +1 and -1.
  int(10*sin(frame)) will vary between -10 and +10.
  With cosine controlling y and sine controlling x, this should create circular motion.
  frame=1: x is 108, y is 9.
  frame=2: x is 109, y is 4.
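Those positions follow directly from the blue rectangle's x/y expressions; checking them in plain Python:

```python
from math import sin, cos

def blue_box_position(frame):
    # The x and y expressions used for the blue rectangle in movingRectangle2
    x = 100 + int(10 * sin(frame))
    y = 4 * frame + int(10 * cos(frame))
    return (x, y)

pos1 = blue_box_position(1)  # (108, 9)
pos2 = blue_box_position(2)  # (109, 4)
```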
Frames from two motions at once
Moving something else: remember this?

def copyBarbsFaceSmaller():
  # Set up the source and target pictures
  barbf=getMediaPath("barbara.jpg")
  barb = makePicture(barbf)
  canvasf = getMediaPath("7inX95in.jpg")
  canvas = makePicture(canvasf)
  # Now, do the actual copying
  sourceX = 45
  for targetX in range(100,100+((200-45)/2)):
    sourceY = 25
    for targetY in range(100,100+((200-25)/2)):
      color = getColor(getPixel(barb,sourceX,sourceY))
      setColor(getPixel(canvas,targetX,targetY), color)
      sourceY = sourceY + 2
    sourceX = sourceX + 2
  show(barb)
  show(canvas)
  return canvas
To move Barb's face around, we have to do this for each frame, moving the target each time.
Moving Barb's head

def moveahead(directory):
  barbf=getMediaPath("barbara.jpg")
  barb = makePicture(barbf)
  for frame in range(1,100): #99 frames
    printNow("Frame number: "+str(frame))
    canvas = makePicture(getMediaPath("640x480.jpg"))
    # Now, do the actual copying
    sourceX = 45
    for targetX in range(frame*3,frame*3+((200-45)/2)):
      sourceY = 25
      for targetY in range(frame*3,frame*3+((200-25)/2)):
        color = getColor(getPixel(barb,int(sourceX),int(sourceY)))
        setColor(getPixel(canvas,targetX,targetY), color)
        sourceY = sourceY + 2
      sourceX = sourceX + 2
    # Now, write out the frame
    # Have to deal with single digit vs. double digit frame numbers differently
    framenum=str(frame)
    if frame < 10:
      writePictureTo(canvas,directory+"//frame0"+framenum+".jpg")
    if frame >= 10:
      writePictureTo(canvas,directory+"//frame"+framenum+".jpg")

moveahead(r"C:\Documents and Settings\Mark Guzdial\My Documents\mediasources\barbshead")
My, isn't that gory!

Can't we make it easier to read?
Can we just deal with the parts that we care about?
Maybe we could use sub-functions?
  At least for the writing out of the frame.
Using subfunctions

def moveahead(directory):
  barbf=getMediaPath("barbara.jpg")
  barb = makePicture(barbf)
  for frame in range(1,100): #99 frames
    printNow("Frame number: "+str(frame))
    canvas = makePicture(getMediaPath("640x480.jpg"))
    # Now, do the actual copying
    sourceX = 45
    for targetX in range(frame*3,frame*3+((200-45)/2)):
      sourceY = 25
      for targetY in range(frame*3,frame*3+((200-25)/2)):
        color = getColor(getPixel(barb,int(sourceX),int(sourceY)))
        setColor(getPixel(canvas,targetX,targetY), color)
        sourceY = sourceY + 2
      sourceX = sourceX + 2
    # Now, write out the frame
    writeFrame(frame,directory,canvas)

def writeFrame(num,directory,framepict):
  # Have to deal with single digit vs. double digit frame numbers differently
  framenum=str(num)
  if num < 10:
    writePictureTo(framepict,directory+"//frame0"+framenum+".jpg")
  if num >= 10:
    writePictureTo(framepict,directory+"//frame"+framenum+".jpg")
What if we have over 100 frames?

def writeFrame(num,directory,framepict):
  # Have to deal with one-, two-, and three-digit frame numbers differently
  framenum=str(num)
  if num < 10:
    writePictureTo(framepict,directory+"//frame00"+framenum+".jpg")
  if num >= 10 and num < 100:
    writePictureTo(framepict,directory+"//frame0"+framenum+".jpg")
  if num >= 100:
    writePictureTo(framepict,directory+"//frame"+framenum+".jpg")

This will work with moveahead() and other functions: it's generally useful.
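The digit-counting ifs can also be replaced entirely with zero-padded string formatting; a sketch of just the filename part (zfill is a standard Python string method):

```python
def frame_name(num):
    # zfill pads the number with leading zeros: 7 -> "007", 42 -> "042"
    return "frame" + str(num).zfill(3) + ".jpg"

names = [frame_name(7), frame_name(42), frame_name(123)]
```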
Using real photographs
Of course, we can use any real photographs we want.
We can use any of the techniques we've learned previously for manipulating the photographs.
Even more, we can use the techniques in new ways to explore a range of effects.
Slowly making it (very) sunset
Remember this code?

def makeSunset(picture):
  for p in getPixels(picture):
    value=getBlue(p)
    setBlue(p,value*0.7)
    value=getGreen(p)
    setGreen(p,value*0.7)

What if we applied this to create frames of a movie, but slowly increased the sunset effect?
SlowSunset

Just one canvas, repeatedly being manipulated:

def slowsunset(directory):
  canvas = makePicture(getMediaPath("beach-smaller.jpg")) # outside the loop!
  for frame in range(1,100): #99 frames
    printNow("Frame number: "+str(frame))
    makeSunset(canvas)
    # Now, write out the frame
    writeFrame(frame,directory,canvas)

This version of makeSunset uses just a 1% decrease per frame:

def makeSunset(picture):
  for p in getPixels(picture):
    value=getBlue(p)
    setBlue(p,value*0.99) # Just 1% decrease!
    value=getGreen(p)
    setGreen(p,value*0.99)

(Not showing writeFrame() because you know how that works.)
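Because slowsunset applies makeSunset to the same canvas every frame, the 1% decreases compound multiplicatively: after n frames a channel keeps about 0.99^n of its value (ignoring the per-frame integer rounding of pixel values). A quick check:

```python
def remaining_fraction(frames, per_frame=0.99):
    # Repeated scaling compounds: each frame keeps 99% of the previous value
    return per_frame ** frames

after_one = remaining_fraction(1)    # 0.99
after_all = remaining_fraction(99)   # about 0.37: only ~37% of blue/green left
```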
SlowSunset frames
Fading by background subtraction

Remember background subtraction? One change here is that the threshold is now an input.

def swapbg(person, bg, newbg, threshold):
  for x in range(1,getWidth(person)):
    for y in range(1,getHeight(person)):
      personPixel = getPixel(person,x,y)
      bgpx = getPixel(bg,x,y)
      personColor = getColor(personPixel)
      bgColor = getColor(bgpx)
      if distance(personColor,bgColor) < threshold:
        bgcolor = getColor(getPixel(newbg,x,y))
        setColor(personPixel, bgcolor)
Use the frame number as the threshold

def slowfadeout(directory):
  bg = makePicture(getMediaPath("wall.jpg"))
  jungle = makePicture(getMediaPath("jungle2.jpg"))
  for frame in range(1,100): #99 frames
    canvas = makePicture(getMediaPath("wall-two-people.jpg"))
    printNow("Frame number: "+str(frame))
    swapbg(canvas,bg,jungle,frame)
    # Now, write out the frame
    writeFrame(frame,directory,canvas)
SlowFadeout