Andrew Hewus Fresh <afresh@grantstreet.com>
Hi, I'm Andrew, and I'm here to extol the virtues of laziness and impatience.
If you're impatient and notice when things annoy you, you can use that to build some proper laziness and be happier.
I've been working at GSG for a few years now, so some of this has been ongoing for a while.
My goal is to get you to think of something that annoys you, as well as a solution you are going to try.
Be sure to look for existing tools, because if someone had the hubris to publish, maybe it's good.
Before I really get started, I want to first tell you not to write code if you can help it. Let other people write the tools for you.
Alex Gaynor gave this talk at PyCon 2016, although he used different examples. I thought about just playing the video and letting him do the work for me, but . . .
There isn't always something you can use, and sometimes what you find is close, but just wrong enough to be frustrating.
Maybe you can customize them, maybe not.
When something is annoying, fix it.
Like your environment
Annoyance | |
---|---|---
More | -> | Less
OS-X | -> | OpenBSD
Aqua | -> | cwm
One of the things that can be most annoying is your environment.
For me that means OpenBSD is less annoying than OS-X.
While I don't expect you to use my environment, figure out what makes you happy. This is important because it gets you into the mindset that when something annoys you, find a way to reduce that frustration.
At least OS-X has basic tools like grep in the base system
And bash
Most annoying to me is when it tries to be too helpful
So I spun up an OpenBSD VM
Work provided a Mac, which at least has grep and similar basic tools, and bash. But it also requires you to use a mouse, and OS-X just tries to be too helpful and gets in my way. I'm least annoyed in OpenBSD.
Fortunately I was able to spin up an OpenBSD VM to do work in. This nearly counts as writing my own tools as I have an @OpenBSD.org account.
The VM runs full-screen and hides most of the distractions. I even compiled mksh on the Linux hosts I use most because bash drives me nuts.
Aqua really annoys me
for example, shortcut keys don't work like I expect
cwm is very minimal, but provides all the features I need.
Stays out of my way
Comes with OpenBSD
Some folks like Aqua, the OS-X desktop environment; it frustrates me. The other day I had a full-screen window that Aqua decided should go away, so it kept switching desktops. No idea why.
I use cwm (the calm window manager) that comes with OpenBSD.
Stays out of my way.
Also counts because I once sent patches in.
I think my first C patches ever.
We track time per-project (because business)
This makes sense, but is annoying
Timetracking. Important, but ugh!
Instead "Yay, an annoyance to solve."
The tool might be just a tiny bit of polish to clean up a rough edge.
cwm has an "exec" hotkey
I just hit Meta+Shift+/
and type "sta Meeting about Stuff"
#!/bin/sh
type=$(basename "$0")
[ "$type" = nt ] && type='' || type="$type "
echo "$( date +%Y-%m-%dT%H:%M:%S ) $type$*" \
    >> ~/time/$( date +%Y-%m-%d )
Automatically adds a log line with a timestamp
extract a summary report
and update the timetracking system
One of the great things about cwm is the "exec" menu. Typing Meta+Shift+/ brings up a command line where you can run anything.
I happen to have two symlinks to a shell script, "sta" and "sto". They let me log time without switching contexts.
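As a self-contained sketch of how that works (the log directory here is a temp dir purely for illustration; the real script keys off its own name via $0 and symlinks):

```shell
# The invoked name ("sta", "sto", or "nt" for a bare note) becomes the
# entry type, and each entry is appended to a per-day log file.
logdir=$(mktemp -d)

log_time() {
    type=$1; shift
    [ "$type" = nt ] && type='' || type="$type "
    echo "$(date +%Y-%m-%dT%H:%M:%S) $type$*" \
        >> "$logdir/$(date +%Y-%m-%d)"
}

log_time sta Meeting about Stuff   # start tracking
log_time sto                       # stop tracking
```

Summarizing a day is then just reading one small text file.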
Other folks tell me of their solutions and it involves clicking menus and boxes popping up to ask questions. Not for me, but it suits them.
Something less terrible from other people.
This makes my life better
All I had to do was complain for a couple years
Someone else implemented it and made it work
Using tools someone else made is
even better than writing
them yourself
One last thing before some examples of things I've written: being the squeaky wheel.
We now have a whole team dedicated to making my life better; unfortunately, other people know about this team, so my requests don't make the top of the list.
A thing that makes my life better is that we now have a "darkpan" that holds our internal modules that get installed with a modified Carton/cpanm.
I have no idea of the details of how it is set up, and don't need to care, as they are someone else's responsibility. However, they make dependency management so much better than what we had.
I've now been squeaking about getting things moved from our "gsgpan" onto the CPAN.
For the last 18 months I've been mostly dedicated to a single project
An auction platform for a new customer
It gave me an opportunity to focus on one thing
Which caused some boredom and so little things annoyed me
This turned out to be a good thing
Yet Another Great Trait - Boredom
The following examples came up because of a project I've been working on for the last 18 months.
Auction platform.
Focus on one thing made little things annoying.
Turned out to be a benefit.
Lots of testing requires interaction with live systems
Some of those systems you can manipulate and use
Some you can't
Either way, more annoyances
Lots of times people say "reuse code" and have poor examples
these seemed like good examples
Interaction with outside systems is annoying.
Some things are easy to provide, like a new clean database.
Some things aren't. Like what time it is.
Ended up doing a lot of mocking.
Was doing it via tried and true "copy-paste" code reuse
That ended up being a lot of work.
my $mock_now   = $test->mock_now;
my $mock_email = $test->mock_send_email;
my $mock_log   = $test->mock_c_log;
We use Test::Class for our test infrastructure.
I'm hoping to move a to Exodist's new Test2 stuff,
but we'll still need something like this.
Turns out I'm too lazy to copy and paste.
I've recently added several utility methods to our Test::Class subclass.
Test::MockObject
$c->log
You write code to make testing easier
That code sometimes sneaks in and makes production code less annoying.
Sometimes code-reuse comes from the strangest places. Like testing infrastructure.
Look for ways you can repurpose code you've already written.
There are two pieces to this: first, I was writing tools to make testing less annoying.
Then I got to reuse that code in production.
I was writing a lot of tests for this new project
Most of them set up a scenario and do something
So much boilerplate was getting in the way of clarity
One of the most annoying bits I ran into writing tests was setting up the auctions I was going to be testing.
You have to set a lot of boilerplate you don't actually care about testing just to get a valid auction.
Copy-pasting a "valid" auction is annoying, and which of the 20 settings is the special one the other test was testing?
I want it to be obvious what's important and not show the rest.
Every time I feel like I learn something new about testing, it turns out Ovid has already given a talk on it, written about it, and put a module on the CPAN. Go watch his talks.
# This auction will end in an hour
my $auction = My::Model::DB::Auction->create_with_defaults(
    { start => $now->clone->subtract( hours => 1 ) } );

# This auction will start in five hours
my $auction = My::Model::DB::Auction->create_with_defaults(
    { end => $now->clone->add( hours => 7 ) } );

# Now, this auction starts ten minutes after it ended
my $auction = My::Model::DB::Auction->create_with_defaults(
    {   start => $now->clone->add( minutes => 5 ),
        end   => $now->clone->subtract( minutes => 5 ) } );
The important piece is the `->create_with_defaults` method: `My::Model::DB::Foo` will hit a `My::Model::Defaults::Foo` and ask for the parameters to pass to `->new`.
Smart enough to know which settings affect the other values.
It takes some work to give it smarts, but it's worth it.
Ask for an auction that starts tomorrow and get it.
`->create_with_defaults` just calls `$class->create( $class->defaults_class->col_data(\%params) );`
Wanted a sample auction folks could bid in
Wanted to give them a limited list of options
Had to set the other values
.... Oh look, I've got this thing
How did this end up in production?
The customer wanted bidders to be comfortable with bidding by practicing in a trial auction.
Bidder picks from a subset of settings. We need to provide reasonable values for everything else.
Hmm. This sounds familiar.
Model::Defaults became the Auction Simulator creation.
Code you write to actually do something helps with testing
Sharing code can go the other way too, sometimes something you wrote already will be helpful.
This project was full of things I wrote for one purpose that I had suspicions could be reused.
When you pay attention, you notice these things.
We don't expect a slow uptick in use
The first auction will be super important
Will it melt down under the load?
We expect the first auction to stress the site.
Already lots of potential bidders.
And several "second" sellers if the first goes well.
Still finding someone to take the risk and be the first.
The first auction is super important.
Hold that thought.
The simulator worked, but was boring
Added "bots" that would bid against you
Which makes it fun and visually exciting
Back to the simulator, which, it turns out, was pretty boring.
I created "bid bots" that bid in the simulator.
They're configurable to bid in different manners. Not an advanced game engine, but good enough, though very deterministic.
Not yet live so no demo. Even though it's really cool.
Due to making it testable, it's also reusable.
We needed to stress test the site
The simulator code became the load test tool
Scrapes the live site using WWW::Mechanize
Generated a lot of load and bottlenecks became obvious
Remember that we needed to load test the site? We didn't have to outsource. Bots are cheaper.
Went from talking to the Model to using WWW::Mechanize.
Was able to generate a *lot* of load and that made the bottlenecks obvious.
Sometimes just a little wrapper is all you need
Some of the fixes I found weren't huge frameworks or large changes; they were just small tweaks to what exists.
You can get a big impact from just a little bit of code.
When a small bit of code needs to be fast, it needs benchmarking and profiling.
But that's annoying.
Finally, on to some code I wrote.
Those bottlenecks from the load test showed this piece was what needed improvement.
A core bit of functionality needed to be fast. Really fast.
So I wrote up some "Benchmark" tests for it.
But then I wanted to see what was actually slow, so I wanted to profile it.
Like normal, run the benchmark under the profiler
perl my_benchmark.pl
becomes
perl -d:NYTProf my_benchmark.pl
but with a separate profile for each benchmark.
Run the benchmark under the profiler like normal and you get one profile with all of the runs lumped together.
So I added a wrapper function that would either benchmark or profile depending on how you ran it.
This made it amazingly easy to see how the code was performing and figure out where to look and what to improve.
Haven't had time to make it reusable yet, but could.
my $profile_dir;

BEGIN {
    if (%Devel::NYTProf::) {    # were we run with -d:NYTProf?
        DB::disable_profile();  # requires $ENV{NYTPROF} = 'start=no'
        $profile_dir = "./nytprof";
        mkdir($profile_dir) or die "Couldn't mkdir $profile_dir: $!"
            unless -e $profile_dir;
    }
    else {
        require Benchmark;
        Benchmark->import( qw( timethese :hireswallclock ) );
    }
}

sub profile_or_timethese {
    my ( $count, $tests ) = @_;
    return $profile_dir
        ? profilethese($tests)
        : timethese( $count, $tests );
}

sub profilethese {
    my ($tests) = @_;

    foreach my $name ( keys %{$tests} ) {
        ( my $safe_name = $name ) =~ s/\W+/_/g;
        my $dir = "$profile_dir/$safe_name";
        mkdir($dir) or die "Couldn't mkdir $dir: $!";
        warn "Profiling to $dir";

        DB::enable_profile("$dir/nytprof.out");
        $tests->{$name}->();
        DB::disable_profile();    # otherwise we get it in the report
        DB::finish_profile();
    }
}

profile_or_timethese( 10, {
    "foo" => sub { foo() },
    "bar" => sub { bar() },
    "baz" => sub { baz() },
} );
I don't expect you to read this, but it's all the code I needed.
Chooses whether to benchmark or profile based on whether you had -d:NYTProf
profile_or_timethese is just like timethese from Benchmark
writes separate profile for each test.
oohh, ahhh, flame graphs. Brendan Gregg++
Now I want to know how other things perform
So I tried using Devel::NYTProf on our Catalyst app
Yes, you can enable single process mode and such nonsense
Still annoying
After whetting my appetite on profiling, I wanted to know how more things performed.
For example, what was our Catalyst app doing?
Have you tried using NYTProf on a Catalyst app?
Profiling Catalyst is annoying.
sandbox -DNYTPROF
Each request generates a separate profile
Adds a list of requests at
http://auction.example.com/nytprofile
Lazily generates the report the first time you click the link
Catalyst profiling is a pain. Sure, you can stop the app, enable profiling in single-threaded mode, and make a request, but I'm too lazy for that.
Added a NYTProf bundle that sets everything up.
"Bundles" are an internal extension, kind of like Catalyst roles; they can contain any Catalyst components and get "injected" into your app. I wish we didn't have so much customization so I could share it.
Start the app with the `-DNYTPROF` flag and it adds the bundle. Each request is slow, but writes an nytprof.$request_id.out file. There's also a "/nytprofile" action listing all the requests.
Automatically generates the HTML profile on demand.
Now I have all these tests
And want to run them
up-arrow + enter == super annoying
reprove!
https://gist.github.com/afresh1/b026d48d0cb2358ea2e6
A lot of what has been annoying me recently is testing, so my tools have been for that. Here's one you can use. I read it as "re-prove", but it does look at you reprovingly when you're unable to get things working. What it does is re-run prove whenever a file changes. It's not a real project, no time for that yet. Demo!
During all this development, the one thing that makes me happy is that I don't just open a browser and click "reload" over and over. Even just running "prove" was fairly annoying. I saw this Ruby TDD video and in the spirit of stealing great ideas, "reprove" was born.
Writing more tests means longer test suite runs. reprove helps by running a subset of tests; soon I hope to work on making the tests fasterer.
Normally if it spots something that isn't a test, it runs prove with whatever arguments you originally passed, which could be just a subset of tests. If it notices a change to a test file it will just re-run the tests found in that file. I have ideas to add support for looking at test coverage results and in conjunction with `git diff` re-running the tests that seem like they'd exercise the code you've changed. But so far it works well enough and is in a gist on github.
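The heart of that idea fits in a few lines of POSIX shell. This is not the actual gist, just a hedged sketch of the change detection: checksum the watched files, and when two snapshots differ, the differing paths are the tests to re-run.

```shell
# Snapshot watched files by content checksum so any edit shows up,
# then diff two snapshots to find which files changed.
snapshot() {
    find "$@" -type f \( -name '*.t' -o -name '*.pm' \) \
        -exec cksum {} + 2>/dev/null | sort
}

changed_files() {
    # Lines unique to either snapshot; the last cksum field is the path.
    diff "$1" "$2" | awk '/^[<>]/ { print $NF }' | sort -u
}

# A reprove-style loop would then be roughly:
#   while :; do
#       snapshot t lib > new.snap
#       files=$(changed_files old.snap new.snap)
#       [ -n "$files" ] && prove -l $files && mv new.snap old.snap
#       sleep 1
#   done
```

The real script is smarter about mapping changed modules back to tests, but checksum-and-diff is enough to stop hitting up-arrow + enter.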
My problems are probably not your problems
Pay attention to what annoys you and make it stop
???