Chapter 13: Creating and Modifying Movies

Chapter Objectives
Movies, animations, and video …oh my! We're going to refer generically to captured (recorded) motion as "movies." This includes motion entirely generated by graphical drawings, normally called animation, and motion generated by some kind of photographic process, normally called video.

Psychophysics of Movies: Persistence of Vision
What makes movies work is yet another limitation of our visual system: persistence of vision. We do not see every change that happens in the world around us. Instead, the eye retains an image (i.e., tells the brain "This is the latest! Yup, this is still the latest!") for a brief period of time. If this were not the case, you would be aware of every blink, because the world would "go away" for a moment.

16 frames and it's motion
If you see 16 separate pictures in one second, and those pictures are logically sequenced (that is, the scene in picture #2 could plausibly follow from the scene in picture #1; 16 pictures of completely different things doesn't work), you will perceive the pictures as being in motion. 16 frames per second (fps), 16 pictures in a second, is the lower bound for the sensation of motion.

Beyond 16 fps
• Early silent films ran at 16 fps.
• Motion picture standards shifted to 24 fps, in part to make sound smoother.
• Video cameras (digital video) capture 30 fps.
How high can we go? Air Force experiments suggest that pilots can recognize a flash of light lasting only 1/200th of a second! Video game players say they can tell the difference between 30 fps and 60 fps.
Bottom lines: generate at least 16 fps and you provide a sense of motion. If you want to process video, you will have 30 fps to process (unless it has been modified elsewhere for you).

Processing movies
Our frames are going to be JPEG pictures: one JPEG file per frame. So if we're going to process movies, we're going to be generating or processing sequences of JPEG files.

Three tools for converting between movies and JPEGs
• MediaTools
• QuickTime Pro (the free QuickTime player won't do it)
• Windows Movie Maker (for converting image sequences to movies)

Using MediaTools
• To generate a series of frame pictures in a folder from an MPEG file.
• To play a folder of frame pictures and save it as a JMV file (JPEG Movie format).
• To play JMV or MPEG movies.

What the other tools can do
QuickTime Pro (http://www.apple.com/quicktime) can read a sequence of JPEG images and produce MPEG, AVI, or QuickTime movies. Windows Movie Maker can create WMV (Windows Media Video) movies from image sequences. ImageMagick (an open-source toolkit) can also read a sequence of JPEG images and produce MPEG movies.

QuickTime Pro: Making a movie from images
• Open an image sequence.
• Choose the first image in the sequence.
• Specify a frame rate.
• POOF! You get a movie!

QuickTime Pro: Making images from a movie
Choose "Export" from the File menu and export as an Image Sequence. Click "Options" to choose the image format (PNG, JPEG) and the frames per second. This saves a numbered sequence of images.

Windows Movie Maker: Making a movie from images
Free with most Windows installations. Choose "Import Pictures" and select all the images in your sequence.

Windows Movie Maker: Creating the movie
Set the "Options" (Tools menu) so that there is a small duration between pictures. Drag all the pictures into the timeline, then play and export your movie.

MPEG? QuickTime? AVI? JMV?
MPEG, QuickTime, and AVI are compressed movie formats. They don't record every frame.
Rather, they record some key frames and then store data about what parts of the screen change on the intervening frames. MPEG is an international standard, from the same people who invented JPEG. AVI is a Microsoft standard. QuickTime is an Apple standard. JMV is a file consisting of JPEG frames stored in an array; every frame is represented.

Why do we compress movies?
Do the math for one second of 640x480 pixels at 30 fps:
• 30 (frames) * 640 * 480 (pixels) = 9,216,000 pixels
• With 3 bytes of color per pixel, that's 27,648,000 bytes, about 27 megabytes of information per second.
• For a 90-minute feature film (a short one), that's 90 * 60 * 27,648,000 = 149,299,200,000 bytes, about 149 gigabytes.
A single-layer DVD holds about 4.7 gigabytes of data, so even on a DVD the movie is compressed.

MPEG movie = MPEG frames plus MP3 soundtrack
An MPEG movie is actually a series of MPEG frames combined with an MP3 soundtrack. It's literally two files stuck together in one. We're not going to deal with sound in movies for now. The real challenge in movie processing is generating and manipulating the frames.

Get the frames in order
Many tools (including os.listdir()) can process frames in order if the order is specified. We specify the order by encoding the number of the frame into the file name. If you put in leading zeroes so that every name is the same length, alphabetical order is also numerical order.

Movies in JES
makeMovieFromInitialFile(firstFile) creates a movie object from the image sequence starting at that file. playMovie(movie) opens a movie player on the movie object; you can write out QuickTime or AVI movies from there.

Simple motion

    def makeRectMovie(directory):
      for num in range(1, 30):  # 29 frames (1 to 29)
        canvas = makeEmptyPicture(300, 200)
        addRectFilled(canvas, num*10, num*5, 50, 50, red)
        # Convert the number to a string
        numStr = str(num)
        # Pad single-digit frame numbers with a leading zero
        if num < 10:
          writePictureTo(canvas, directory + "\\frame0" + numStr + ".jpg")
        if num >= 10:
          writePictureTo(canvas, directory + "\\frame" + numStr + ".jpg")
      movie = makeMovieFromInitialFile(directory + "\\frame01.jpg")
      return movie

A few frames
(Sample frames from the sequence are shown here.)

Making and playing the movie

    >>> rectM = makeRectMovie("c:\\Temp\\rect")
    >>> playMovie(rectM)

Important cool thing: you can draw past the end of the picture!
addText, addRect, and the rest of the drawing functions work even if you go beyond the edge of the picture. Drawings clip what can't be seen, so you don't get an array-out-of-bounds error. This is a big deal, because it means you don't have to do complicated math to check when you're past the edge of the picture. But this only holds for the drawing functions: if you set pixels directly, you're still on your own to stay in range.

Making a tickertape

    def tickertape(directory, string):
      for num in range(1, 100):  # 99 frames
        canvas = makeEmptyPicture(300, 100)
        # Start at the right edge and move left
        addText(canvas, 300 - (num*10), 50, string)
        # Now, write out the frame.
        # Have to handle single-digit vs. double-digit frame numbers differently.
        numStr = str(num)
        if num < 10:
          writePictureTo(canvas, directory + "//frame0" + numStr + ".jpg")
        if num >= 10:
          writePictureTo(canvas, directory + "//frame" + numStr + ".jpg")

Playing the tickertape movie
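The single-digit vs. double-digit handling above (and in the examples that follow) can also be done with Python's built-in string method zfill, which pads a string with leading zeroes to a given width. A minimal sketch; padFrameName is an illustrative helper name, not a JES function:

    def padFrameName(num, digits=2):
      # zfill pads with leading zeroes, so alphabetical order of the
      # file names matches numerical order of the frames.
      return "frame" + str(num).zfill(digits) + ".jpg"

    # For example:
    #   padFrameName(7)     gives "frame07.jpg"
    #   padFrameName(23)    gives "frame23.jpg"
    #   padFrameName(7, 3)  gives "frame007.jpg"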
Can we move more than one thing at once? Sure!

    from math import sin, cos  # sin and cos come from the math module

    def movingRectangle2(directory):
      for num in range(1, 30):  # 29 frames
        canvas = makeEmptyPicture(300, 250)
        # Add a filled rectangle moving linearly
        addRectFilled(canvas, num*10, num*5, 50, 50, red)
        # Let's have one just moving around
        blueX = 100 + int(10 * sin(num))
        blueY = 4*num + int(10 * cos(num))
        addRectFilled(canvas, blueX, blueY, 50, 50, blue)
        # Now, write out the frame.
        # Have to handle single-digit vs. double-digit frame numbers.
        numStr = str(num)
        if num < 10:
          writePictureTo(canvas, directory + "//frame0" + numStr + ".jpg")
        if num >= 10:
          writePictureTo(canvas, directory + "//frame" + numStr + ".jpg")

Moving two things at once

Moving a clip from a picture

    def moveHead(directory):
      markF = getMediaPath("blue-mark.jpg")
      mark = makePicture(markF)
      head = clip(mark, 275, 160, 385, 306)
      for num in range(1, 30):  # 29 frames
        printNow("Frame number: " + str(num))
        canvas = makeEmptyPicture(640, 480)
        # Now, do the actual copying
        copy(head, canvas, num*10, num*5)
        # Now, write out the frame.
        # Have to deal with the number of frame digits.
        numStr = str(num)
        if num < 10:
          writePictureTo(canvas, directory + "//frame0" + numStr + ".jpg")
        if num >= 10:
          writePictureTo(canvas, directory + "//frame" + numStr + ".jpg")

    def clip(picture, startX, startY, endX, endY):
      width = endX - startX + 1
      height = endY - startY + 1
      resPict = makeEmptyPicture(width, height)
      resX = 0
      for x in range(startX, endX + 1):    # +1 so the end column is included
        resY = 0  # reset result y index
        for y in range(startY, endY + 1):  # +1 so the end row is included
          origPixel = getPixel(picture, x, y)
          resPixel = getPixel(resPict, resX, resY)
          setColor(resPixel, getColor(origPixel))
          resY = resY + 1
        resX = resX + 1
      return resPict

The clip() function returns part of another picture. moveHead() then uses the general copy() function we defined earlier to paste the clipped head onto the canvas.

Moving around Mark's head

What if we have over 100 frames?

    def writeFrame(num, directory, framepict):
      # Have to handle one-, two-, and three-digit frame numbers differently
      framenum = str(num)
      if num < 10:
        writePictureTo(framepict, directory + "//frame00" + framenum + ".jpg")
      if num >= 10 and num < 100:
        writePictureTo(framepict, directory + "//frame0" + framenum + ".jpg")
      if num >= 100:
        writePictureTo(framepict, directory + "//frame" + framenum + ".jpg")

This will make all our movie-making easier; it's generally useful.

Rewriting moving Mark's head
This code is much easier to read and understand with the subfunctions.

    def moveHead2(directory):
      markF = getMediaPath("blue-mark.jpg")
      mark = makePicture(markF)
      face = clip(mark, 275, 160, 385, 306)
      for num in range(1, 30):  # 29 frames
        printNow("Frame number: " + str(num))
        canvas = makeEmptyPicture(640, 480)
        # Now, do the actual copying
        copy(face, canvas, num*10, num*5)
        # Now, write out the frame
        writeFrame(num, directory, canvas)

Using real photographs
Of course, we can use any real photographs we want, and we can use any of the techniques we've learned previously for manipulating them. Even more, we can use those techniques in new ways to explore a range of effects.

Slowly making it (very) sunset
Remember this code? What if we applied it to create the frames of a movie, slowly increasing the sunset effect?

    def makeSunset(picture):
      for p in getPixels(picture):
        value = getBlue(p)
        setBlue(p, value*0.7)
        value = getGreen(p)
        setGreen(p, value*0.7)
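Before looking at the slow-sunset version below, it helps to see how a small per-frame change compounds. The slowsunset code that follows reduces blue and green by just 1% per frame, so after n frames a channel value v has become v * (0.99 ** n); after about 100 frames that is roughly 37% of the original, a much stronger effect than a single 30% reduction. A purely illustrative check (not from the chapter):

    # How a 1% per-frame reduction compounds over 100 frames
    v = 200.0                # a sample starting blue value
    for frame in range(0, 100):
      v = v * 0.99           # the same reduction slowsunset applies each frame
    print v                  # roughly 73, i.e. about 37% of the original 200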
SlowSunset
Just one canvas, repeatedly manipulated. (Not showing writeFrame() again, because you know how that works.)

    def slowsunset(directory):
      canvas = makePicture(getMediaPath("beach-smaller.jpg"))  # outside the loop!
      for frame in range(0, 100):  # 100 frames (0 to 99)
        printNow("Frame number: " + str(frame))
        makeSunset(canvas)
        # Now, write out the frame
        writeFrame(frame, directory, canvas)

    def makeSunset(picture):
      for p in getPixels(picture):
        value = getBlue(p)
        setBlue(p, value*0.99)   # Just a 1% decrease!
        value = getGreen(p)
        setGreen(p, value*0.99)

SlowSunset frames

Fading by background subtraction
Remember background subtraction? One change here is that the threshold is now an input parameter.

    def swapbg(person, bg, newbg, threshold):
      for x in range(1, getWidth(person)):
        for y in range(1, getHeight(person)):
          personPixel = getPixel(person, x, y)
          bgpx = getPixel(bg, x, y)
          personColor = getColor(personPixel)
          bgColor = getColor(bgpx)
          if distance(personColor, bgColor) < threshold:
            bgcolor = getColor(getPixel(newbg, x, y))
            setColor(personPixel, bgcolor)

Use the frame number as the threshold

    def slowfadeout(directory):
      bg = makePicture(getMediaPath("wall.jpg"))
      jungle = makePicture(getMediaPath("jungle2.jpg"))
      for frame in range(0, 100):  # 100 frames (0 to 99)
        canvas = makePicture(getMediaPath("wall-two-people.jpg"))
        printNow("Frame number: " + str(frame))
        swapbg(canvas, bg, jungle, frame)
        # Now, write out the frame
        writeFrame(frame, directory, canvas)

SlowFadeout

Different images, with subfunctions

    def swapBack(pic1, back, newBg, threshold):
      for x in range(0, getWidth(pic1)):
        for y in range(0, getHeight(pic1)):
          p1Pixel = getPixel(pic1, x, y)
          backPixel = getPixel(back, x, y)
          if distance(getColor(p1Pixel), getColor(backPixel)) < threshold:
            setColor(p1Pixel, getColor(getPixel(newBg, x, y)))
      return pic1

    def slowFadeout(directory):
      origBack = makePicture(getMediaPath("bgframe.jpg"))
      newBack = makePicture(getMediaPath("beach.jpg"))
      for num in range(1, 60):  # 59 frames
        # Re-open the original picture each time through the loop
        kid = makePicture(getMediaPath("kid-in-frame.jpg"))
        swapBack(kid, origBack, newBack, num)
        # Now, write out the frame
        writeFrame(num, directory, kid)

Cool effect!

Dealing with real video
We really can't deal with live video: processing each frame takes a lot of computation. If you were going to process each frame as fast as it was coming in (or going out), you'd have only 1/30th of a second per frame. We cheat by:
• saving each frame as a JPEG image,
• processing the JPEG images, and
• converting the frames back into a movie.

The original kid-in-bg-seq movie

Let's have Mommy "watching"
We'll paste Barb's head into each frame, using os.listdir to process all the frames of the kid sequence. We process each frame and copy Mommy's head onto it, just as we earlier animated the head moving along a line on a blank canvas.

    import os
    def mommyWatching(directory):
      kidDir = "C:/ip-book/mediasources/kid-in-bg-seq"
      barbF = getMediaPath("barbaraS.jpg")
      barb = makePicture(barbF)
      face = clip(barb, 22, 9, 93, 97)
      num = 0
      for file in os.listdir(kidDir):
        if file.endswith(".jpg"):
          num = num + 1
          printNow("Frame number: " + str(num))
          framePic = makePicture(kidDir + "/" + file)
          # Now, do the actual copying
          copy(face, framePic, num*3, num*3)
          # Now, write out the frame
          writeFrame(num, directory, framePic)

MommyWatching

Lightening a picture
I took some video of a puppet show under black light. It's very hard to see the puppets: your eye can pick them up, but the camera can't. Recall the earlier discussion: your eye can detect luminance changes that no medium can replicate.

Dark-fish2 sequence
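The background-subtraction, chroma-key, and lightening code in this chapter all decide whether two colors are "close" by comparing JES's distance() against a threshold (the frame number, 100, or 8). Assuming the standard JES behavior, distance() is the straight-line (Euclidean) distance between the red, green, and blue values. A sketch of the same idea written over pixels; pixelDistance is an illustrative name, not a JES function:

    from math import sqrt

    def pixelDistance(pixel1, pixel2):
      # Euclidean distance in RGB space: identical colors give 0,
      # black vs. white gives about 441.7 (sqrt(3 * 255**2)).
      return sqrt((getRed(pixel1) - getRed(pixel2))**2 +
                  (getGreen(pixel1) - getGreen(pixel2))**2 +
                  (getBlue(pixel1) - getBlue(pixel2))**2)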
How I did the processing
First try: lighten every pixel. It didn't work. It made the black whiter as well as the colors, so there was no improvement in contrast. Second try: explore the frames in MediaTools first. The black parts are really black, and the lighter parts have really low values. So: look for any pixel that is less black than black (threshold = 8) and lighten it a couple of values.

Lightenfish

    import os
    def lightenFish(directory):
      framenum = 0
      for framefile in os.listdir(getMediaPath("dark-fish2")):
        framenum = framenum + 1
        printNow("Frame: " + str(framenum))
        if framefile.endswith(".jpg"):
          frame = makePicture(getMediaPath("dark-fish2") + "//" + framefile)
          for p in getPixels(frame):
            color = getColor(p)
            if distance(color, black) > 8:
              color = makeLighter(color)
              color = makeLighter(color)
              setColor(p, color)
          writeFrame(framenum, directory, frame)

Original sequence again
Same frames after lightening

Putting kids on the moon
We took a video of our kids crawling past a blue sheet. Unfortunately, we shot it under electric light, not daylight, so the sheet isn't really blue. We chromakey against black instead, and the downside is that dark pants and eyeballs go away too.

Code for putting kids on the moon

    import os
    def kidsOnMoon(directory):
      kids = "C://ip-book//mediasources//kids-blue"
      moon = getMediaPath("moon-surface.jpg")
      back = makePicture(moon)
      num = 0
      for frameFile in os.listdir(kids):
        num = num + 1
        printNow("Frame: " + str(num))
        if frameFile.endswith(".jpg"):
          frame = makePicture(kids + "//" + frameFile)
          for p in getPixels(frame):
            if distance(getColor(p), black) <= 100:
              setColor(p, getColor(getPixel(back, getX(p), getY(p))))
          writeFrame(num, directory, frame)

Making underwater movies look better
Water filters out red and yellow light, so we can color-correct underwater footage by increasing red and green. (Before and after frames are shown.)

Code for fixing underwater footage
A useful helper function makes the task easier.

    import os
    def changeRedAndGreen(pict, redFactor, greenFactor):
      for p in getPixels(pict):
        setRed(p, int(getRed(p) * redFactor))
        setGreen(p, int(getGreen(p) * greenFactor))

    def fixUnderwater(directory):
      num = 0
      dir = "C://ip-book//mediasources//fish"
      for frameFile in os.listdir(dir):
        num = num + 1
        printNow("Frame: " + str(num))
        if frameFile.endswith(".jpg"):
          frame = makePicture(dir + "//" + frameFile)
          changeRedAndGreen(frame, 2.0, 1.5)
          writeFrame(num, directory, frame)

Building an effect from the bottom up
Notice that the underwater-footage code was made cleaner and clearer through the use of an extra "helper" function, which made the main function easier to read and shorter to write. We can build visual effects "bottom-up" by writing helper functions first and then assembling them.

Drawing with light
Many commercials feature actors "drawing" with light: light beams that seem to hang in the air. How could we do that?

Our algorithm
The light should create high-luminance pixels.
1. From frame 1, for each pixel of high luminance, copy its color to frame 2. Now frame 2 contains the high-luminance pixels from both frame 1 and frame 2.
2. Go on to frames 2 and 3, and repeat step 1.
Each frame ends up containing the "trace" of the light from all the previous frames.

Input
We had my kids draw in darkness (to make sure the luminance difference is large) with flashlights and light sticks.

What do we need?
First step: compute luminance.

    def luminance(apixel):
      return (getRed(apixel) + getGreen(apixel) + getBlue(apixel)) / 3.0
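The simple average of red, green, and blue is all this chapter needs. As an aside (my own addition, not something the chapter uses), a common alternative weights the channels to match how bright each appears to the eye, the BT.601 luma formula:

    def weightedLuminance(apixel):
      # Weighted luma (BT.601): green contributes most to perceived brightness.
      # For pure red this gives about 76, versus 85 from the simple average.
      return 0.299*getRed(apixel) + 0.587*getGreen(apixel) + 0.114*getBlue(apixel)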
Test the pieces
As we build each piece, we test it. You don't want to build more on top of a piece until you know it works! We make a tiny picture so that we can set its one pixel to known colors and check its luminance.

    >>> pict = makeEmptyPicture(1, 1)
    >>> pixel = getPixelAt(pict, 0, 0)
    >>> white
    Color(255, 255, 255)
    >>> setColor(pixel, white)
    >>> luminance(pixel)
    255.0
    >>> black
    Color(0, 0, 0)
    >>> setColor(pixel, black)
    >>> luminance(pixel)
    0.0

Is that bright enough?

    def brightPixel(apixel, threshold=100):
      if luminance(apixel) > threshold:
        return true
      return false

This could also be written:

    def brightPixel(apixel, threshold=100):
      return luminance(apixel) > threshold

This uses a Python feature that lets you give a parameter an optional default value: we can specify a threshold, but if we don't, it will be 100.

Testing our brightness function

    >>> red
    Color(255, 0, 0)
    >>> setColor(pixel, red)
    >>> luminance(pixel)
    85.0
    >>> brightPixel(pixel)
    0
    >>> brightPixel(pixel, 80)
    1
    >>> brightPixel(pixel, threshold=80)
    1
    >>> setColor(pixel, white)
    >>> brightPixel(pixel, threshold=80)
    1
    >>> brightPixel(pixel)
    1
    >>> setColor(pixel, black)
    >>> brightPixel(pixel, threshold=80)
    0
    >>> brightPixel(pixel)
    0

Walking through the list of files

    import os
    def allFiles(fromDir):
      listFiles = os.listdir(fromDir)
      listFiles.sort()
      return listFiles

    def firstFile(filelist):
      return filelist[0]

    def restFiles(filelist):
      return filelist[1:]  # everything after the first file

Testing the file list functions

    >>> files = allFiles("/")
    >>> files
    ['Recycled', '_314109_', 'bin', 'boot', 'cdrom', 'dev', 'etc', 'home', 'initrd', 'initrd.img', 'initrd.img.old', 'lib', 'lost+found', 'media', 'mnt', 'opt', 'proc', 'root', 'sbin', 'srv', 'sys', 'tmp', 'usr', 'var', 'vmlinuz', 'vmlinuz.old']
    >>> firstFile(files)
    'Recycled'
    >>> restFiles(files)
    ['_314109_', 'bin', 'boot', 'cdrom', 'dev', 'etc', 'home', 'initrd', 'initrd.img', 'initrd.img.old', 'lib', 'lost+found', 'media', 'mnt', 'opt', 'proc', 'root', 'sbin', 'srv', 'sys', 'tmp', 'usr', 'var', 'vmlinuz', 'vmlinuz.old']

Now, putting it all together!

    def brightCombine(fromDir, target):
      fileList = allFiles(fromDir)
      fromPictFile = firstFile(fileList)
      fromPict = makePicture(fromDir + fromPictFile)
      for toPictFile in restFiles(fileList):
        printNow(toPictFile)
        # Copy all the high-luminance colors from fromPict into toPict
        toPict = makePicture(fromDir + toPictFile)
        for p in getPixels(fromPict):
          if brightPixel(p):
            c = getColor(p)
            setColor(getPixel(toPict, getX(p), getY(p)), c)
        writePictureTo(toPict, target + toPictFile)
        fromPict = toPict

Final frame of a light-drawing movie

Why?
Why does movie processing take so long? Why does sound processing seem to go so fast? Why can Photoshop do these things faster than we can in Python? What makes software fast, or slow? Coming soon…