How it Works (3.29.15)

FullAuto Process Automation – How it Works
The modern world really LOVES “gadgets”. There is a gadget for literally everything – from peeling apples, to changing
light bulbs in hard-to-reach places, to removing lint from your sweater. The goal of nearly all these wonders of the modern age is to
save the user time and aggravation. A lint shaver for example, saves the user countless rounds of pulling those annoying and
embarrassing fuzz balls off of sweaters. Who can possibly denigrate such a “wonder”?
When is the last time YOU used one?
There are other issues of course that dilute the effectiveness of such a device. The shaver needs to be stored where it
can be easily retrieved when it is needed. It requires a battery that has the nasty habit of being dead right when you most need it. It
is far from a perfect instrument – it works only with the “low-hanging fruit” lint. The really embedded stuff still needs to be dug out by
fingernail. And finally, maybe it’s just plain easier to put the lint-filled sweater back on the hanger and grab the new sweater you
purchased last week instead.
“Computer process automation” has been and continues to be a lot like
the plethora of gadgets that litter every kitchen, bathroom closet and garage from
“sea to shining sea”. It sounds GREAT, rivaling the very best late-night infomercials.
But oftentimes, it’s just easier to put the lint-filled sweater back on the hanger and
grab the new sweater instead. I’m reminded of the “inside-the-egg scrambler” introduced
in the ’70s. It works great, but really, how hard is it to just break the egg into a bowl
and mix it with a fork? Not to mention that the fork and bowl are far easier to keep
clean than the wonder device – and no batteries are needed!
This is precisely why, in an age where computers can literally do “anything” – countless tasks continue to be performed
manually. When subjected to a cost-benefit analysis, the “gadget” (computer process automation) fails to deliver enough value to
justify the time and effort it takes to implement. For countless tasks, “computer process automation” has been and continues to be
more trouble and expense than it’s worth.
Yet there are gadgets that do deliver. And there are gadgets that once were mere novelties that are now ubiquitous. In the
early 1980s, a cordless screwdriver was almost useless – expensive, bulky and heavy, with low torque, short battery life and long
recharge times. Today they are used as much as, if not more than, manual ones. Today, computer process automation is a lot like
the early cordless screwdriver. The technology currently available is expensive, difficult to implement, fragile, challenging to
troubleshoot and maintain, and on top of that it’s slow.
But all that is about to CHANGE. Today’s cordless screwdrivers are not just better than their 30-year-old ancestors; they
are orders of magnitude better. For computer process automation software to really deliver on the promise of significantly reducing
manual activities and associated costs, the innovation has to be orders of magnitude better than what is now available.
After 15 years of intense development, FullAuto Process Automation Software is now ready to deliver on that promise. It is
not just “better” than the state of the art currently available; it is orders of magnitude better.
To illustrate that, we’ll use an analogy. Consider for a moment, that there are two HUGE warehouses on opposite sides of
a very busy road. The warehouses represent two computers, and the road between them, the network that separates them. Imagine
that a shipment is to go from one warehouse to the other. Even though it is “only” across the street (literally), it is still easier to load
and use a truck than any other method of conveyance. The truck then has to leave its home facility, checking out at the exit gate;
and then crossing the street and entering the other facility, now checking in at their entry gate. Imagine that each load has to be
treated independently, inspected, itemized and unloaded, regardless of its relationships to other loads, or the fact that it comes
literally from across the street. That is a LOT of overhead, time and expense that somehow seems unnecessary given the close
proximity and similarity of the two complexes.
Modern computer systems interacting with each other programmatically over a network work a lot like the analogy
presented. Security is of course very important, which is why the security exit and entry gates are discussed in the example. One
modern and widely used method for sending “instructions” and “data” (truck loads in our analogy) to other computers is a software
utility called Secure Shell, or SSH. SSH is a command-line tool that is now available for all computer systems, including the
Mainframe. Think of SSH as the truck from our example.
The black window on the side of the truck in the illustration is what’s called a terminal emulator window. Just to illustrate how close to reality
we are with our analogy, consider that trucks load and unload cargo at terminals. We also load data into databases. We can
continue with the analogies – but you get the point.
A computer process is nothing more (when you break it down) than a series of commands. When an IT employee
engages in “work”, what they are really doing is typing in and running commands. Think of commands, and the output
from commands, as the packages that are transported by the truck. In fact, at the network level, data is called “packets”. (But I
digress.) Cost savings come into play when you can get the computers themselves to run more commands, and IT employees
fewer commands. That is computer process automation defined as simply as it can be.
Unfortunately, we are all familiar with a rather sinister form of “computer process automation” that costs billions of dollars
a year to protect ourselves from – called computer viruses. Anti-virus software is a countervailing form of “computer process
automation” aimed at defeating the computer virus. So computer process automation is all around us every day and has been for
quite some time. So why are manual computer processes still such a prevalent practice in nearly all IT organizations the world over?
The answer is rather simple: computer process automation ... is hard. Writing a good computer virus is hard. Writing
good anti-virus software is just as hard, if not harder. Considering there are now billions of computer users worldwide, the number
of skilled virus writers is a tiny, TINY minority. The number of developers working on anti-virus software is also small. The people
engaged in these activities are some of the most skilled programmers on the planet. Very unfortunate for all of us of course, and a
tremendous waste of resources and talent – but it is what it is.
The average IT employee is orders of magnitude less skilled than virus and anti-virus programmers. Kind of like the
difference between doctors and nurses. Again, unfortunate – but it is what it is. So if computer process automation is to become (by
an “order of magnitude”, of course) more widespread than it is today, then the software available to do it must be easy enough for
the average IT employee to master.
FullAuto software was created to precisely address that need. But before we look more closely at FullAuto itself, it would
be helpful to explore a couple of reasons why computer process automation is so inherently difficult. The first one (sticking to a theme)
is naturally those wicked “virus writers”. Computer process automation is fundamentally hampered by the need for security.
Protections put in place to protect computers from nefarious “automation”, also unfortunately work against the successful
implementation of beneficial automation. Walls impede both sinners and saints. Barbed wire will entangle both the good and the
bad.
In the late ’90s, Secure Shell (or SSH) arose out of the need for a secure means of successfully connecting to, and
interacting with, a remote computer. With SSH, one can perform just about any activity on a remote system that one can perform on
their local computer.
Which leads us to the second reason computer process automation is hard. Common sense would have us ask this: if
SSH is secure and with it we can perform just about any activity on a remote system – then why not just automate the running of
commands over SSH? Problem solved? … Right?
There is just one small issue with accomplishing that. Currently, no one has found a way to run
multiple serial commands, via a program, over a persistent SSH connection.
Huh? Care to explain that in English?
Let’s return to our truck and warehouse analogy. The truck starts out empty. Packages are loaded into the truck at the
base warehouse, the truck passes through two security check points, is unloaded at the destination warehouse, loaded again at the
destination warehouse (program output) and then again passes through two security check points again before it returns to the base
warehouse where it is unloaded and returned to empty status. Now imagine that the truck, instead of carrying multiple packages
loaded separately, instead carries only one big pallet – for that is actually closer to the way computer communication over SSH
takes place. The pallet can contain one item or multiple items. But it is still just one pallet. If the individual boxes were computer
commands, the “shrink wrap” that binds them together would be semi-colons.
ls;hostname;id;uname
The problem with “packaging” commands with semi-colons is that all the output returned from all those commands will be
sent back to the “base” computer all appended together – with no clear separation. So even though with this method multiple
commands can be sent and run via SSH, this is rarely done because of the problem of sorting out and properly parsing the output
from those commands.
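The effect is easy to reproduce locally, without any remote machine at all. In this hypothetical Python sketch, three harmless echo commands stand in for ls, hostname, id and uname, and a single shell invocation plays the role of the one-shot SSH channel:

```python
import subprocess

# Run several commands through ONE shell invocation, the way
# "ls;hostname;id;uname" travels over a single SSH channel.
combined = subprocess.run(
    "echo alpha; echo beta; echo gamma",
    shell=True, capture_output=True, text=True,
).stdout

# The three outputs arrive as one undifferentiated stream; nothing in
# the text itself says where one command ends and the next begins.
print(combined)
```

The caller gets back one blob of text. Deciding after the fact which lines came from which command is guesswork, which is exactly why this technique is rarely used in practice.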
Note how the output from all four commands appears in the terminal window completely lacking any kind of consistent and
identifiable separator. This is the core problem encountered when attempting to run multiple commands over a single SSH
connection. In actual practice, a new SSH connection is made for each single command – just to avoid this problem. So instead of a
pallet, imagine the truck discussed above making a completely new trip to the destination warehouse for every package! Imagine the
truck has to go through security four times just to deliver that single package and return to base. Not very efficient, is it? Now
imagine that a process you wish to automate over SSH contains a hundred commands or more. This means a hundred new “trips”
and four hundred additional encounters with security. Are you beginning to see why automation that spans computers is hard?
Programs attempting to use SSH for automation often use an operating system component called a “pseudo-terminal” or
PTY. Pseudo-terminals have the advantage of enabling a programmer to send individual commands over a persistent connection,
and to retrieve output before sending another command. The problem, though, is precisely the same issue described above: there is
no reliable way to accurately separate the output from multiple commands. Many computer scientists have considered this an
impossible problem to solve. Consider the opinion of Randal L. Schwartz, expert programmer and author of numerous books and
manuals, on the perlmonks.org collaboration site:
I don't believe there's any way that a program that sits on a pty¹ can ever distinguish where the data is coming
from. This is the blessing and curse of Unix I/O.
So, no, I think you're just gonna have to consider the various things to wait for, some of which will be error-like
messages coming at unexpected times.
Randal L. Schwartz, http://www.perlmonks.org/?node_id=28096
¹ A pty is also known as a pseudo-terminal and is the principal channel through which software can attempt to emulate human interaction
with command shells. Pseudo-terminals are natively available on all Unix variants, but are available on Microsoft Windows only with additional
software – like the open source Cygwin Linux emulation layer available for free from Red Hat. http://www.cygwin.com
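The classic workaround (familiar from tools like Expect) is to echo a unique sentinel string after every command and read output up to that marker. The Python sketch below drives one persistent local sh process that way; it is a stand-in for a PTY-backed SSH session, and emphatically not FullAuto's algorithm, which requires no sentinel and no remote cooperation:

```python
import subprocess

SENTINEL = "__CMD_DONE__"  # assumed-unique marker; a real tool would randomize it

def run_commands(cmds):
    """Send several commands down ONE persistent shell, separating each
    command's output by echoing a sentinel after it (the Expect-style
    workaround). Stderr is not captured here, which is one of its holes."""
    sh = subprocess.Popen(["sh"], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, text=True)
    results = []
    for cmd in cmds:
        sh.stdin.write(cmd + "\necho " + SENTINEL + "\n")
        sh.stdin.flush()
        lines = []
        for line in sh.stdout:          # read until our marker comes back
            if line.strip() == SENTINEL:
                break
            lines.append(line.rstrip("\n"))
        results.append(lines)
    sh.stdin.close()
    sh.wait()
    return results

out = run_commands(["echo one", "echo two && echo three"])
print(out)   # each command's output kept separate
```

Note the fragility: if a command's own output happens to contain the sentinel, or if error text arrives interleaved at unexpected times, the parsing silently breaks. That is precisely the unreliability Schwartz describes above.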
This problem is so vexing that process automation innovators have developed some very expensive workarounds –
expensive both in money and in effort required. The most notorious is IBM’s Tivoli ESM (Enterprise Systems Management)
software. Tivoli predates SSH, and so far appears unthreatened by it. It is what is known as client-server architecture: a Tivoli
program (or agent) is literally installed on every computer needed by a process, and each installation is a license expense. Tivoli is
challenging to install, maintain, program and successfully utilize. IT departments the world over have whole teams of personnel
whose entire job is managing Tivoli and its automated jobs in the organization. The following link provides a
good historical overview of IBM’s Tivoli: http://www.softpanorama.org/Admin/Tivoli/index.shtml
Another approach to process automation across multiple computers is the “process scheduler”. BMC Software’s flagship
product is CONTROL-M Enterprise Manager. Like IBM’s Tivoli, Control-M is client-server architecture. BMC Software grosses nearly
two billion a year, and employs nearly 5000 consultants – who work principally supporting Control-M. Control-M’s approach to
process automation is simply that of a “terminal agent”. It schedules individual “jobs” (or trucks, sticking with our analogy) to
execute at precise intervals. It is nothing more than a glorified “dispatcher” – a robust and full-featured dispatcher to be
sure, but still, at the end of the day, a mere “dispatcher”. The “trucks” it dispatches suffer all the problems and limitations discussed
above.
Finally, to complete our survey of competing automation technology approaches, we will examine a more recent entry into
the field – a product called “Chef”. Chef differs from Tivoli and Control-M in its efforts to cope with the “timing” and “roll-back”
problems. Again, a process is nothing more than a series of commands. But of course, it is never really that simple. There are
frequent cases where a command cannot execute successfully unless a previous command ran successfully. An example is a
command that reads a file: if that file doesn’t exist, the command will fail. Suppose the file it is to read is dynamically
created by a different process run earlier in time. How is the new command supposed to be sure that the file was successfully
created by the earlier process? In our analogy, how does the truck driver KNOW that an earlier delivery that his load depends on
was successfully delivered? In the real world, the truck driver would use a radio. The “Chef” software implements a kind of radio
check by using a database. Output from commands is written to a database, which is accessible by all “truck drivers” (or “command
wrappers” in computing parlance) – so that they can discover the status of earlier “dependent” loads. If they find out that the earlier
load was not successfully or completely delivered on time, the driver can delay his trip or cancel it until the situation with the earlier
load is resolved. With roll-back, imagine that three loads were delivered before the fourth delivery failed (the fourth truck had an
accident and the load was destroyed). Not only does driver five now cancel his trip, but the three earlier loads have to be picked up
from the destination warehouse and returned to the base warehouse. That, in a nutshell, is “roll-back”, and it is a common need in
large organizations. When not deemed an absolute necessity, roll-back capability is still highly desired as it increases reliability, and
saves IT organizations time, effort and money. But … it’s hard – which is why it is used mainly in mission critical processes where
cost is not an issue.
FullAuto changes ALL this. ALL the problems listed above stem principally from the failure to establish a persistent and secure
connection between two computers (warehouses in our analogy) over which multiple commands can run and return output in a
reliable fashion. Rather than trucks, what is actually desired is a connecting catwalk between the two warehouses:
With such a structure, passing packages back and forth between the two warehouses becomes as easy as moving
packages within one warehouse. It is fair to say that with such a catwalk, the two warehouses essentially become ONE
BIG WAREHOUSE. By logical extension, two computers connected with a persistent SSH connection essentially become ONE BIG
COMPUTER.
FullAuto is the world’s FIRST and currently ONLY successful implementation of “computer process automation” software
utilizing persistent SSH connections over which multiple commands can be executed in a reliable fashion – without reliance on
remote host configuration. But hold on! – wasn’t it stated earlier that experts considered this an impossible problem to solve?
Indeed it was so assumed – but it turns out, after 15 years of intense development, software author Brian Kelly has
achieved the impossible. It turns out – the “experts” were WRONG, and FullAuto proves it! FullAuto is not a “theory”; it is not an
“idea on a napkin”. It is fully developed, fully tested, world class software ready to use TODAY. Recall that earlier it was said that:
For computer process automation software to really deliver on the promise of significantly reducing manual
activities and associated costs, the innovation has to be orders of magnitude better than what is now available.
FullAuto is indeed orders of magnitude better than any other computer process automation software currently available anywhere.
FullAuto does not require the skills of a virus author, or anti-virus developer. FullAuto can be successfully and efficiently used by IT
administration technicians with average script and batch file writing skills. This is important because scripts and batch files are used
by every IT organization everywhere. Every IT organization employs admin technicians – from one to thousands in number. Anyone
who has the skills to write a basic script or batch file is skilled enough to write instruction sets for FullAuto. FullAuto was created with
one of the world’s oldest and most respected scripting languages – Perl. In fact, it can be said that FullAuto is really a Perl
extension that enables the Perl scripting language to be multi-computer capable.
Another reason FullAuto is orders of magnitude better, is the fact that FullAuto is open source. The Perl scripting
language is open source and royalty free. FullAuto is also royalty free, as are all the components that FullAuto utilizes. Compare this
with the three automation solutions discussed above – whose annual licensing fees run anywhere from thousands to tens of
thousands, to even hundreds of thousands of dollars!
Another reason FullAuto is orders of magnitude better, is the fact that FullAuto works just as a technician would – not like
a “program” does. Consider the four commands used in the earlier example:
The exact same commands, run manually by a technician in the same terminal emulator (or cmd) window, would look like
this:
Here is the code FullAuto would use to run the exact same commands:
Here is the output after FullAuto has run all those commands:
Note that the content of the output is the same – though the appearance differs slightly because of output formatting. This is a minor
difference of no consequence in terms of functionality. Simply change the size of the window, and the output format will change.
Keep in mind that FullAuto is dealing with raw data – with no terminal formatting for visual display to worry about. Also note that the
output from the different commands is separated by an extra blank line. The output is not all bunched together as in the earlier
example. The output from each separate command was returned successfully to the terminal BEFORE the next command in the
chain was run, which is precisely how humans interact with remote computers. Humans (with few exceptions) wait for output from
each command before sending another. This is a VERY simple example – but it demonstrates the fact that FullAuto can handle
multiple commands and correctly partition the output. In summary, FullAuto is orders of magnitude better than other automation
solutions because it works just as a technician would. This is precisely why IT personnel with average skills can master
FullAuto: it simply utilizes the skills they already have, instead of requiring more advanced ones.
Another reason FullAuto is orders of magnitude better, is the fact that FullAuto captures errors and reports them simply
and accurately. Recall what Randal Schwartz said earlier about error information coming over a PTY:
I think you're just gonna have to consider the various things to wait for, some of which will be error-like
messages coming at unexpected times.
FullAuto’s unique parsing algorithm correctly identifies error output and makes it available to the program/script in a
simple and familiar fashion. Let’s revisit our example, but use only two commands instead of four:
Now we will modify the code slightly to enable it to provide error information to the program/script:
Note how we added the variable $error – and the means to print it out if it has a value. Now let’s change the command ‘hostname’ to
‘hostnam’ – which is an invalid command:
Now note what happens when we run FullAuto:
Note how the error message is presented to the program/script: ERROR=-bash: hostnam: command not found. This demonstrates how
easily FullAuto captures both OUTPUT and ERROR conditions, in a way that is easy to code for.
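For readers without access to the screenshots, the shape of that interface can be approximated locally. The Python sketch below is a rough, hypothetical stand-in for FullAuto's output/$error pair (its actual Perl API is shown in the screenshots, not here):

```python
import subprocess

def run(cmd):
    """Run one command and hand back its stdout and stderr separately,
    mimicking the (output, error) pair described in the text."""
    p = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return p.stdout.strip(), p.stderr.strip()

# Misspell 'hostname' as 'hostnam', exactly as in the example above.
output, error = run("hostnam")
if error:
    print("ERROR=" + error)   # the shell's "command not found" message
```

The calling script never has to guess whether a given line of text was normal output or an error; the two arrive in separate, clearly labeled variables.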
Which leads us to yet another reason FullAuto is orders of magnitude better than other solutions: the dependency
problem presented earlier. Recall from earlier in our discussion how the “Chef” automation solution attempted to handle this problem
– utilizing a database to act as a kind of “radio” that “trucks” (“command wrappers”) could use to check the status of earlier deliveries
that their load depended on. With FullAuto, there are no “trucks” – so there is no need for such a “radio”. In FullAuto there are no
“command wrappers” that have to check for the status of earlier “deliveries”, because in FullAuto no command will be run unless the
earlier command completed successfully. All other automation solutions use a dispatch or time-sensitive form of command-chaining.
A command is run on a schedule – and after a fixed amount of time, the next command is run, regardless of the success or failure of
the earlier dependent command. Chef’s command wrappers have the ability to check earlier commands for success or failure – but
such checking requires a LOT of coding and configuration, demanding a higher level of skill to utilize than FullAuto. On top of that,
the processing is MUCH slower, because commands are still launched independently with no direct relationship to earlier
commands (unlike FullAuto). Finally, Chef makes much more significant demands on hardware and the network with numerous
database calls in addition to command execution.
FullAuto does not use time-sensitive chaining at all – so there is no timing configuration to worry about! (Another factor
that makes the old way of doing process automation hard.) Rather, FullAuto uses direct output-chaining. Let’s go back to our
example to demonstrate the principle. Imagine that a conveyor belt was set up on our catwalk connecting the two warehouses. As
packages arrive at the destination warehouse, they are quickly inspected, and if one is found defective, a button is pushed and the
whole conveyor stops. No “radios” are involved! No “trucks” that have to be independently told to wait or abort. The code to
implement this is VERY easy. Examine the code sample below, which is similar to the earlier example except that now the first
command is broken and the second one is fixed. After the first command, an exit instruction was added after the error print
statement:
Now let’s run the code and see what happens:
Note that the ‘hostname’ command did not run at all. FullAuto correctly detected the error with the first command, and exited as
instructed. No other automation solution is this simple, this straightforward. Again, no “geniuses” required. No databases to worry
about! Exiting was a simple way to demonstrate the principle – but any form of error handling could be introduced, including
extremely complex approaches. There is NO LIMIT to the capabilities of FullAuto in this regard!
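The conveyor principle itself, never starting command N+1 unless command N succeeded, can be sketched in a few lines of Python. This is an illustration of the idea only; FullAuto applies it over a persistent SSH connection rather than to local subprocesses:

```python
import subprocess

def run_chain(cmds):
    """Run commands strictly in order, refusing to start the next one
    unless the previous succeeded: the 'conveyor belt stops' behavior."""
    done = []
    for cmd in cmds:
        p = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if p.returncode != 0:
            # Stop the conveyor: report what ran, what failed, and why.
            return done, cmd, p.stderr.strip()
        done.append(cmd)
    return done, None, None

done, failed, err = run_chain(["true", "false", "echo never-runs"])
print(done, failed)   # ['true'] false -- the echo was never attempted
```

No schedule, no timing window, no database check: the failure of one link simply halts the chain, and the caller decides whether to exit, retry, or branch into more elaborate error handling.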
Another reason FullAuto is orders of magnitude better, is the fact that FullAuto is
NOT client-server architecture. What does that mean? As mentioned earlier, client-server
architecture means that a program has to be installed on EVERY computer in the process
chain of command execution. All commercial products require a license fee to install these
programs on each and every computer. All FullAuto requires on remote computers is an
SSH service. On Linux and Unix systems, SSH is part of the operating system itself – so no
installation of any kind is required. For Microsoft Windows, SSH is easily added via the
highly respected and globally utilized Cygwin Linux emulation layer for Windows.
http://www.cygwin.com SSH is FREE – either the native version on Linux or UNIX, or the
one supplied by Cygwin for Windows. Client-server architecture also often means that
custom code or instructions need to be written and managed for EACH computer! This is yet another reason that doing
“computer process automation” via the old fashioned approach is hard. FullAuto custom code (or instruction set) all exists in ONE
script, on ONE computer – so it is EASY to keep track of all the instructions that control the entire automated process of command
execution – no matter how many computers are involved! All of it is in ONE place, on one single computer, in one custom code file.
IT CAN’T GET ANY EASIER!
Another reason FullAuto is orders of magnitude better than any other automation solution currently available, is because
FullAuto enables average IT technicians, with average batch and script writing skills, to successfully code and manage ROLL-BACK. This is HUGE, and this feature alone will make FullAuto a compelling choice for IT organizations everywhere. To illustrate
the capability as simply as possible, imagine we have a directory with five “packages”:
Note how package-4 is spelled differently from the other packages – it has a “dash” and not an underscore character. We want to
rename the files from package_1 to package_one – and so on. Now let’s examine the “roll-back” code we would use:
Now let’s run it and see what happens:
Note how three packages were renamed successfully before an error was encountered with the fourth package. FullAuto reported the
error, and then proceeded to successfully ROLL BACK the first three commands, so that when FullAuto finished, everything was
restored to its original state. Indeed there is more code, and more complexity, than in earlier examples, but it is NOTHING compared
to what is required to successfully achieve this behavior with other automation approaches.
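As a language-neutral illustration of the roll-back pattern itself (not FullAuto's code), here is a Python sketch that renames a batch of files and restores the originals when one rename fails. A deliberately absent package_4 stands in for the mis-named package-4 above:

```python
import os
import tempfile

def rename_with_rollback(renames):
    """Apply renames in order; on the first failure, undo the ones
    already applied (in reverse) so the directory returns to its
    original state."""
    applied = []
    try:
        for src, dst in renames:
            os.rename(src, dst)          # raises OSError if src is missing
            applied.append((src, dst))
    except OSError as e:
        for src, dst in reversed(applied):
            os.rename(dst, src)          # roll back in reverse order
        return False, str(e)
    return True, None

d = tempfile.mkdtemp()
for i in (1, 2, 3, 5):                   # package_4 deliberately missing
    open(os.path.join(d, "package_%d" % i), "w").close()

plan = [(os.path.join(d, "package_%d" % i),
         os.path.join(d, "renamed_%d" % i)) for i in (1, 2, 3, 4, 5)]
ok, err = rename_with_rollback(plan)
print(ok, sorted(os.listdir(d)))   # False -- the originals are restored
```

The undo stack (`applied`) is the whole trick: every successful step records how to reverse itself, and the first failure replays that stack backwards.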
How is all this possible? How did an ordinary guy come up with a solution to a problem that escaped the legions of
geniuses employed by IBM or Microsoft? How did it escape all the academics at MIT? Simple really – precisely because the
“experts” considered the problem impossible to solve, NOBODY attempted to solve it! This created an opportunity for anyone who
didn’t buy in to that assumption to search for a solution without fear of competition, and to take as long as needed. It took fifteen
years, but a solution was found – and it is ready to take the world by storm.
FullAuto has been used to do automated deployments of new code for large organization intranets. FullAuto is also being
used by web developers to synchronize their local development environments with development servers – saving time and effort
and reducing mistakes. FullAuto was recently called upon to solve a vexing production problem introduced by an upgrade to
Liferay Portal technology. Very quickly it was discovered that there was a caching issue that prevented web business authors from
seeing their changes. The workaround was having a developer visit all 4 nodes and manually clear the cache. This was very time
consuming and became a frequent activity. Since a fix from Liferay was not quickly forthcoming, the team turned to FullAuto for a
solution. In less than four hours of development time, FullAuto was doing the job of clearing the caches on all 4 nodes every five
minutes – 24/7. This is the kind of power and flexibility that FullAuto can bring to the table. No job is too big – or too small.
FullAuto is a critical component in migration & redesign projects. The TeamSite content management system is an
example of an old infrastructure being retired at a large organization. The data now being managed in TeamSite has to be migrated
to the Liferay content management system. There is just one “slight” problem (cough): the data formats of the two systems are
completely incompatible! This is precisely where FullAuto’s strengths come into play. Because FullAuto can access any data,
anywhere, in a secure and reliable fashion, FullAuto has become the middleware that pulls the data, transforms the data, and
delivers the data to Liferay in a format that Liferay can utilize. The alternative is to do the job manually, a job requiring thousands of
person hours. That’s right – thousands! With FullAuto, that time was reduced to a few hundred hours, which could be divided
amongst existing staff; no need to bring in outside resources for assistance.
FullAuto is already delivering on its potential with large organizations. And it’s doing this mostly under the radar. Most of
the world is largely unaware of FullAuto’s full potential – and what is needed is a showcasing of FullAuto’s wide ranging problem
solving capabilities.
So what exactly are FullAuto’s “wide-ranging problem-solving capabilities”?
Glad you asked!
At a recent IT Operations meeting, the speaker was the new head of marketing. She was asked at the end of the
presentation, what was the one thing she would most like to see from IT. Her answer was quick, direct and emphatic: “Solutions that
don’t cost half a million dollars!” The remark generated an uneasy laughter – for it’s true that IT organizations are always on the
lookout for the next BIG project, with big funding and big visibility. But that is yesterday’s paradigm, yesterday’s approach to problem
solving. Change is needed, and it will come either voluntarily or “violently” – in ways none of us want to contemplate. I’m all for the
voluntary approach!
FullAuto can be a key component in helping to deliver needed solutions in a fraction of the time taken by more traditional
approaches, and with a fraction of the cost and a fraction of the resources needed. In this day and age, organizations have to do
MORE with LESS. In fact, it’s more true than not, that organizations will have to do a LOT MORE with a LOT LESS if they wish to
survive the next decade of titanic changes facing everyone everywhere.
We have discussed FullAuto’s exciting potential as a problem solver. But it is wise to also consider some challenges many
will face when contemplating a move to FullAuto. The biggest one is the fact that FullAuto will not be doing anything truly new.
Rather it improves upon what is already being done by other software and other personnel. This is good and bad depending on
one’s “perspective”. For cost conscious leadership, it’s all good; but from the perspective of your average IT technician – not so
much. All over the world scores of professionals are earning a living – a good one in most cases – performing manual processes,
and managing client-server automation architectures like Tivoli and Control-M. Their goal is less about saving the organization
money and more about remaining "mission critical" to it themselves. This is why it is more likely that FullAuto
processes will be dispatched from Control-M rather than FullAuto being allowed to be its own dispatcher. The Control-M folks will
wish to remain “mission critical” and will insist that their involvement is “necessary”. The reality is that FullAuto will be called on to do
small difficult jobs before it is trusted with bigger, more mission critical ones. The Liferay caching problem discussed earlier comes
to mind. So while the potential is truly unlimited, that is not to say that reaching that potential will be easy or quick. It won't be. IT
leadership interested in seeing FullAuto used to save money and increase process reliability should mandate or encourage employee
training in FullAuto, treating that training as an end in itself rather than merely a means. Training is its own deliverable, and when
the course is completed, the accountability returns to the trainee.
It is wise to leave the choice of how and where to use FullAuto (if at all) up to the organization's IT personnel, because they're going to
insist on participating in that decision anyway, and will get their way (especially in the early going) more often than not.
An example of this was experienced a couple years ago. The author was asked to create a POC (Proof of Concept)
demonstrating how FullAuto could transfer files via ftp from multiple folders on one computer, to just one folder on the remote
computer. After less than two hours of development, the solution was complete, ran in a few seconds, and was very reliable. Still, it
lost out to a Control-M solution where every folder became a separate job – around twenty of them or so. None of them connected
to each other in any way, which made error conditions much more challenging to resolve. And on top of that, each job required a
license fee (around a hundred dollars or so per license)! FullAuto lost out because it was deemed too "experimental". Trust is an
obstacle, difficult to win and easy to lose. The roughly $2,000 for the Control-M solution was considered a negligible cost. C'est la vie!
The Control-M folks won that round – and remained “mission critical”. And they will continue winning for some time to come.
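The POC task described above is small enough to sketch directly. The following is an illustrative Python sketch of the task itself, collecting files from several local folders and sending them all to one remote folder over FTP using the standard ftplib module. It is NOT FullAuto's actual API, and the host, credentials and folder names are placeholders.

```python
# Illustrative sketch of the POC task: many local folders -> one remote folder.
# Uses only Python's standard library; all connection details are placeholders.
import os
from ftplib import FTP

def gather_files(folders):
    """Return (local_path, filename) pairs for every file in the given folders."""
    pairs = []
    for folder in folders:
        for name in sorted(os.listdir(folder)):
            path = os.path.join(folder, name)
            if os.path.isfile(path):
                pairs.append((path, name))
    return pairs

def upload_all(host, user, password, folders, remote_dir):
    """Upload every gathered file into a single remote directory over FTP."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.cwd(remote_dir)
        for path, name in gather_files(folders):
            with open(path, "rb") as fh:
                ftp.storbinary("STOR " + name, fh)
```

Even this toy version shows why one connected script beats twenty disconnected jobs: a failure anywhere stops the whole transfer in one place, instead of leaving twenty independent error states to untangle.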
FullAuto will not replace or even directly compete with Enterprise Management Systems, or enterprise software testing
suites, or high-end intrusion detection software any time in the near future. Rather, FullAuto will first complement these systems.
FullAuto is still a very new innovation, and it will take time before it has all the features (and trust) that these systems possess (such
as dashboards for data centers, etc.). However, there are a LOT of smaller jobs that FullAuto can do orders of magnitude
better, faster, easier and more cost-effectively than these systems – jobs these systems are mostly not doing anyway, jobs
performed mostly manually by IT technicians because they are too infrequent or unpredictable, or not mission critical enough or too
difficult to justify the cost of running on enterprise automation systems like Tivoli and Control-M.
FullAuto can help because FullAuto has the unique ability to get at data wherever it is, transform it into anything
imaginable, and deliver it anywhere – quickly, securely and reliably and without needing Mensa members and endless budgets to do
it all. One of the more recent FullAuto innovations, and one of the most exciting, is FullAuto’s ability to automate the web – both the
internet and intranets. Using a relatively recent plugin created for Mozilla’s Firefox browser (also an open source and license-free
application), FullAuto can automate any web functionality directly through Firefox's back end. FullAuto can do this for
precisely the reason others can't, as discussed earlier: parsing the output of multiple commands sent to Firefox via a telnet interface
(the predecessor to SSH). With this capability, FullAuto is able to login to the Liferay Administration Control Panel, and directly pull
out data the same way a user would. This is one of the ways the TeamSite migration project is made manageable. The reason
FullAuto can do this is that Firefox's MozRepl plugin is one of only two known methods by which a script can execute JavaScript
in a persistent fashion. The other is a rather experimental browser emulator application that is extremely difficult to install and
manage – and therefore not in widespread use. With the ability to execute JavaScript, FullAuto can successfully automate the web
with less code and more success than any other web automation solution. Web automation ALONE will be one of the biggest
reasons folks will want to use FullAuto.
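The telnet-style exchange with MozRepl described above can be illustrated with a short sketch. MozRepl conventionally listens on localhost port 4242; the JavaScript snippet, the read logic and the prompt format assumed here are illustrative, and this is a generic Python sketch of the protocol rather than FullAuto's implementation.

```python
# Illustrative sketch: sending JavaScript to Firefox through MozRepl's
# telnet-style interface (conventionally localhost:4242).
import socket

def mozrepl_eval(js, host="localhost", port=4242, timeout=5.0):
    """Send one line of JavaScript to MozRepl and return the raw reply text."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.recv(4096)                     # discard the banner and first prompt
        sock.sendall(js.encode() + b"\n")   # MozRepl evaluates input line by line
        return sock.recv(4096).decode(errors="replace")

def strip_prompt(reply):
    """Drop MozRepl's 'repl>' prompt lines from a reply (assumed format)."""
    return "\n".join(line for line in reply.splitlines()
                     if not line.startswith("repl>")).strip()

# e.g. strip_prompt(mozrepl_eval("content.document.title;"))
# would return the title of the page currently loaded in Firefox
```

Because the reply is plain text interleaved with prompts, the same parse-the-output discipline FullAuto applies to SSH sessions applies here, which is exactly why the approach carries over.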
FullAuto will soon be able to access the Mainframe via 3270 – in addition to SSH.
With all this ability to access any hardware – including routers, bridges and controllers in addition to “computers”, FullAuto
can do just about anything. It can do:
- Automated Software Builds and Deployments
- Data Transfers and Data Migrations, even between incompatible systems and formats
- Data Consolidation
- Backups and Restoration
- Remote Software Installation
- System Patching
- System Monitoring
- Configuration Changes
- User Administration
- Job Scheduling
- Reporting
- Notifications
- Vulnerability Testing
- Intrusion Detection
- Unlimited Database Access and Automation Capabilities
- Windows GUI Automation (with AutoHotKey)
- Web Automation (with Firefox MozRepl)
In fact, it would be more difficult to imagine a task FullAuto cannot perform than to list all those it can!
Finally, FullAuto itself can be controlled and executed from other applications. Custom web-based user dashboards can
be created relatively easily. FullAuto can operate as a service that can be accessed via web services technologies utilizing XML or JSON
(such as SOAP and REST). FullAuto can be called and executed by other automation systems like Tivoli, Control-M and Chef – for
these will not be going away anytime soon. FullAuto can do the whole job, or just a tiny part of a bigger job. Its flexibility is practically
unlimited.
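The service-style dispatch described above can be sketched with a minimal JSON endpoint. This Python sketch uses only the standard library; the run_task stub, the URL shape and the response fields are hypothetical placeholders standing in for a real FullAuto invocation by a caller such as Control-M.

```python
# Illustrative sketch: exposing an automation task behind a tiny JSON (REST-style)
# endpoint. The run_task stub is a placeholder for dispatching a real process.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_task(name):
    """Placeholder for dispatching an automation process; returns a status dict."""
    return {"task": name, "status": "ok"}

def build_response(name):
    """Serialize a task result as the JSON body the calling system receives."""
    return json.dumps(run_task(name))

class TaskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET /backup -> runs the "backup" task and returns its status as JSON
        body = build_response(self.path.strip("/")).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("0.0.0.0", 8080), TaskHandler).serve_forever()  # start when needed
```

The point of the sketch is the shape, not the stub: any scheduler that can issue an HTTP request and parse JSON can dispatch the job, which is what makes this style of integration with existing systems so cheap.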
It is my hope that I have succeeded in generating some excitement about FullAuto. I look forward to meeting with all who
are interested, and discussing more about how this exciting software innovation can solve stubborn process automation problems
and help save significant money for countless organizations all over the world.
Thank you!
Brian Kelly
Brian.Kelly@fullauto.com
Cell: 201-248-6832