Volt Breeze, Testing, Traits, & Inheritance

Matt Stauffer:
You are now listening to the Laravel Podcast.
Hello everybody and welcome back to Laravel Podcast season six. I'm one of your hosts, Matt Stauffer, and Taylor, you want to say hi?

Taylor Otwell:
Hey, everybody.

Matt Stauffer:
That's Taylor Otwell, the man, the mystery, the legend, the founder of Laravel. And today we got a lot of topics, but the big focus of today is going to be about testing. But before we get there, there is a new release or two. I know that Pale came out with an official release, but we already talked about Pale last time, but there's also a new Breeze Stack. We talked about one last time, but could you share what's going on with Folio and the Folio functional, everything like that in the Breeze stacks if people aren't familiar with Folio?

Taylor Otwell:
You mean Volt?

Matt Stauffer:
Sorry, Volt. Yes, Volt.

Taylor Otwell:
Yes, Folio is cool but we mean Volt.

Matt Stauffer:
Yes, we mean Volt.

Taylor Otwell:
We released a new Breeze stack, probably our last stack for a while for Breeze. It is a Livewire stack, but using the Volt functional syntax for your components. What that means is, your component logic for your Livewire components is right there in the template, and not only is it right there in the template, it has this functional style where if you want to define a method, you just define a closure and give the variable a name, and then you can assign that to a wire:click handler. If you want to define some state variables, you just call the state function and tell it the name of the variable. It's similar to the Vue Composition API versus the Vue Options API, what the difference between those two things feels like. But anyway, we released a new stack for Breeze that ships with that out of the box. So that is actually a really easy way to try it out.
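A minimal sketch of the functional style Taylor is describing, using Volt's `state` function and a closure action (the counter component itself is hypothetical):

```php
<?php

use function Livewire\Volt\state;

// Declare component state right in the Blade template.
state(['count' => 0]);

// A named closure becomes an action you can wire to browser events.
$increment = fn () => $this->count++;

?>

<div>
    <h1>{{ $count }}</h1>
    <button wire:click="increment">Increment</button>
</div>
```

The whole component lives in a single Blade file; there is no separate component class.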

Matt Stauffer:
And when you put that announcement out, you mentioned that, if I follow you, you can now use the Volt functional or the Volt class-based syntax, so at some point we switched from the traditional Livewire stack to the Volt class-based one as the primary in Breeze, right?

Taylor Otwell:
We've actually never had a true Livewire stack in Breeze that was separate. That's only Jetstream.

Matt Stauffer:
See, I [inaudible 00:02:10] got the words of that one mixed up.

Taylor Otwell:
There's so many packages now. Jetstream is still the same, but in Breeze we only have the Volt Livewire options.

Matt Stauffer:
Because I mixed up Folio and Volt, let's just real quick talk about it. Volt, you already explained what it is: it's the new syntax, the all-in-one view components version of Livewire. Can you talk real quick about Folio, since we haven't actually covered it extensively?

Taylor Otwell:
Folio is super simple. It's just a page-based routing package for Laravel. You can drop a Blade template in a pages directory and just go straight to it in your browser using some routing convention. So if the template is about-us.blade.php, you can go to myapplication.com/about-us and it renders the template. And you can do all sorts of wildcard and parameters very similar to Next.js page-based routing in Laravel.
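As a rough sketch of the convention Taylor describes (file names here are illustrative, following Folio's pages directory default):

```blade
{{-- resources/views/pages/about-us.blade.php renders at /about-us --}}
<h1>About Us</h1>

{{-- resources/views/pages/posts/[id].blade.php renders at /posts/{id};
     the wildcard segment is injected into the view as $id --}}
<h1>Viewing post {{ $id }}</h1>
```

Dropping a template into the pages directory is the whole routing step; no route file entry is needed.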

Matt Stauffer:
And one of the cool things about working with Folio is because of Volt you can also do significantly more functionality in those individual pages than you would've, had we had Folio without Volt, which is why they're intertwined in my head, and you released them around the same time and everything. Volt makes Folio so much more powerful.

Taylor Otwell:
Yeah, exactly. I think Volt makes Folio legit useful versus just an interesting toy.

Matt Stauffer:
Totally. All right, let's talk about tests. We got a ton of questions about tests, but I also mentioned to you earlier that I've been reading tons of code in a hiring round at Tighten over the last couple of weeks. I've just read so much code from folks, and they're all building the same application, but we could really see how they do it differently in this take-home. And one of the things that I've noticed is that people write tests very differently. When I was looking at the queue of questions we had from people about tests, I was like, "This is really relevant to what I've been thinking about." The biggest question here has just been, what are the best practices for writing tests? And obviously that's a broad thing. And anytime someone says best practices, it raises a little bit of a yellow flag, going, "Well, best practices in what context?" Or whatever.
But one of the things I wanted to just see is if we can talk through at least what helps us make the delineation in a given application between one or the other. So it's not like one way is right and one way is wrong, but more of an it-depends kind of thing. I wanted to start at the highest level with Pest, which is a testing framework created by a Laravel core member, and it's now being shipped and everything like that. It's been around for a while, you can choose Pest when you're starting a Laravel app. A lot of Laravel projects use Pest testing.
I think a lot of people either don't know about Pest, or what they know about Pest is purely just that it's that different syntax, like the it-whatever type of syntax. Could you talk a little bit about Pest? And for anybody curious, Nuno, the creator of Pest, did a Laravel Podcast episode last season where he goes into much greater depth. So if you really want to get all the details of it, go there. But Taylor, could you give us a real quick rundown of what Pest offers that makes it motivating for you to offer it as a potential syntax for folks? Not even a syntax, but a potential test runner for Laravel apps?

Taylor Otwell:
Like you said, Pest removes a lot of the noise from traditional PHPUnit testing suites. You can do all of the normal things you can do in PHPUnit, but you can also do a bunch of other things. And just the overall syntax style, it feels similar to Volt where it removes a lot of the code noise and boils your test down to just simple functions that are invoked, but it has a bunch of other cool stuff. I think the way... or I don't know.
I don't know if it's really new stuff separate from PHPUnit, but it's just more ergonomic, I would say. The way you use data providers in PHPUnit is, like, I believe putting a comment annotation on your data provider method and blah blah blah. Whereas in Pest you might just chain on a with method, or I don't remember what the method is, and give it an array of data and that's your data provider. A lot of just really more developer experience focused niceties in tests, I think. It also includes some stuff that's kind of just not in PHPUnit. There's the architecture testing plugin where you can make sure that all of the classes in a certain directory maybe-

Matt Stauffer:
Have a certain-

Taylor Otwell:
Don't extend anything or have a certain style, have a certain suffix or whatever. I find that nice. I actually don't have a ton of experience using Pest in production projects. It's not really a first party tool that we maintain here at Laravel, it's just a very popular package that's maintained by Nuno, who now happens to work at Laravel.
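The chainable method Taylor is reaching for is Pest's `with()`. A hypothetical test with a dataset, plus an architecture expectation of the kind he mentions, might look like this (the `isValidRepositoryName` helper is made up for illustration):

```php
it('rejects invalid repository names', function (string $name) {
    // Each dataset entry below is passed in as $name.
    expect(isValidRepositoryName($name))->toBeFalse();
})->with([
    'spaces in name',
    '../not-a-repo',
]);

// An architecture expectation from Pest's arch testing plugin:
// every class under App\Models must extend the Eloquent base model.
test('models extend Eloquent')
    ->expect('App\Models')
    ->toExtend('Illuminate\Database\Eloquent\Model');
```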

Matt Stauffer:
Yes, exactly.

Taylor Otwell:
And we have it as an option in our starter kits because it has become so popular in the Laravel ecosystem. Very similar to Livewire or Inertia, where a package becomes so popular it feels quasi first party at a certain point, especially when the person maintaining it works at Laravel. A lot of people seem to like it, I think, for the developer experience of the whole thing.

Matt Stauffer:
And for those who don't know, Pest both has a different syntax, which is similar to RSpec, so it's like the whole it and then you write a nice string rather than having to give your snake case method name, and then inside of a closure you do your test there, and that's where you're saying there's all these niceties where you're chaining things on and there's a lot less of the cruft of a class or whatever. But for those who are maybe unsure about it, you can use Pest to run PHPUnit style tests and you still get the benefit of a whole bunch of additional tooling that Pest comes with, that a lot of people add to all their PHPUnit test apps anyway, like parallel testing and coverage and this custom architecture testing tool you were talking about and stuff like that.
There's a lot of tools that Pest brings in even if you choose to use the PHPUnit syntax, and I did not know that when I was talking to Nuno originally, I thought Pest meant the RSpec syntax and that's it. So when I learned that it's like it's building all these features on top of PHPUnit and also adds an optional syntax, I was like, "Then it makes a lot more sense just to work with it."

Taylor Otwell:
Yeah, snapshot testing, things like that.

Matt Stauffer:
Let's dig into some specifics of writing tests, whether it's in Pest or in PHPUnit. So the first question that I see come up, and it came up a lot in a lot of my interviews over the last couple of weeks, is unit versus feature tests. A lot of people have these very strict definitions of what a unit test is and what a feature test is. And if it's a unit test, it's not allowed to touch the database and stuff like that. And I know historically people have said a unit test is a test that tests only one thing, but what that one thing is allowed to be can be a little bit of a different definition. Is that one thing allowed to involve the database or not? In your mind, do you have a really strong delineation of feature versus unit tests? Is it something you think about a lot?

Taylor Otwell:
I feel like I write... It's rare that I write something that I put in the unit test directory in a Laravel app. I just feel like that's really rare for me. It would usually be some specific... Let's imagine you had a method that performed some array manipulation to put an array in a certain structure, or some mathematical computation, like a formula. Maybe you're calculating bonus pay for a certain period for an employee and that's this specific math calculation, that feels like something that can be unit tested, that formula, and it's probably not hitting the database. But I just feel like it's so rare and the applications I build at least where I have something like that even to test. Most of my tests are feature tests, most of them hit the database in some way, most of them even hit routes in some way. Those are most of the tests I write because I just feel like that's the types of applications I'm building and they're also the tests that just give me the most confidence in the overall stability of the application.

Matt Stauffer:
Sometimes-

Taylor Otwell:
Do you find yourself writing many unit tests?

Matt Stauffer:
Yeah, sometimes, we built some projects recently where we don't even know what the application is going to look like yet, but we are working with a researcher or something, or an analyst who's like, "I have this magic formula that I've built in Excel." And we need to represent that magic formula and then build an interface in front of it. And so one of the things we do often is build a black box, like PHP class, that represents the formula. Because it's usually not a formula, it's usually 30 different formulas with 50 different inputs that output one or more pieces of data. And often you can put data in or out without touching a user interface, a route or anything like that. That's my most common use case for unit tests because it is actually a unit, this black box class or set of classes takes inputs, spits out outputs. Then later we'll plug it into places and then we don't have to worry quite as much that our feature tests are testing the math or the code or whatever, the math of it, it's more just testing the traditional app. Things like if you get an input it's validated or not or whatever.
Outside of those things with a very specific nuanced core identity, or math, or calculations, or whatever, it's pretty infrequent for me. And one of the things you mentioned, like a pay structure, I have a calculator that I run that handles profit sharing for Tighten, so it calculates how much profit share everybody gets every quarter. And it is doing lots of calculations that I need to make sure it gets right, but a lot of it's based on the database.
That's where I was like, "I guess you can call it a unit test." Because it's saying, "In this circumstance, what does this calculation return?" But I put it into my feature tests directory because it requires, here's 20 people, each of whom has had six salaries that took effect at different times, and plugging those into the calculations and making sure that it correctly gets the latest salary for each person even if they're right at the edge of a new salary. Even those complicated calculations I'm doing with that one are still so connected to the database that it's very much a feature test. The answer is not very often, but every once in a while when we've got one of those little black box calculators.

Taylor Otwell:
I actually just pulled up Laravel Vapor here on my end just to see what we had in our unit test directory. And one of the tests we have, it looks like, is a custom validation rule for a valid repository name. So it's basically a string validation check and we feed a bunch of strings into it and it spits back true or false. So that's very unit testy to me. It's not hitting any database or anything like that. And it's probably helpful to clarify for people, what is actually the difference between the feature and the unit test directory in Laravel from a tech perspective? And the main difference is, tests in the feature directory actually boot up the framework and call all of your service providers; they bootstrap the whole Laravel app.
Whereas tests in the unit directory just extend the base PHPUnit test class and don't do any framework booting. You can still instantiate any of your classes because the Composer autoloader is registered, but none of your service providers have run, and you can't call into the app. What does that mean? Basically it means that your unit tests boot up a little faster in terms of your test suite, because we don't have to bootstrap the whole Laravel app. Now, does it actually make a difference in your application? It probably depends on how many unit tests you have. And if you just have a few, it's negligible and not really going to make a difference. But that is the tech difference between putting something in a feature directory versus a unit directory in a Laravel context.
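In a stock Laravel skeleton, that difference comes down to which base class a test extends (the test class names here are hypothetical):

```php
// tests/Feature/PublishPostTest.php
// Extends the app's Tests\TestCase, which boots the full framework
// (service providers, container, routing) before each test runs.
class PublishPostTest extends Tests\TestCase
{
}

// tests/Unit/BonusCalculatorTest.php
// Extends PHPUnit's base case directly: nothing is booted, but
// Composer's autoloader still lets you instantiate any of your classes.
class BonusCalculatorTest extends PHPUnit\Framework\TestCase
{
}
```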

Matt Stauffer:
That's very helpful. And so the general rule here seems to be, if you are relying on your Laravel application to be booted, then it should be a feature test. And if you're not, then it shouldn't be. And that's great because that's more than just database, that's also, if it relies on any aspects of your service providers having been set up or anything like that, you can then say, "Then it's got to be in the feature test." Obviously that's a Laravel specific thing, but it's a Laravel podcast so we can talk about it that way. Sweet.

Taylor Otwell:
One sort of epiphany, I don't know, this is moving deeper into the testing thing-

Matt Stauffer:
It's fine.

Taylor Otwell:
... but I think it was a big epiphany for me in my testing career, so to speak, where I had this phase of testing where I was mocking everything, it felt like, and injecting everything into my controllers, and all of those things were mocked, and it started to feel like my tests were... I think I've tweeted before, it felt like my tests were becoming spellcheckers to make sure that the right methods on certain mocks were called: expect this method to be called and return this, expect that method to be called and not return that.
And it's like, "What?" I'm just rewriting the method itself in my test. I'm not actually testing any behavior. And early in my career it felt like a lot of my tests looked like that because, I think, it depends who your influences are, but a lot of sort of testing gurus had this very mock heavy approach, it felt like at the time, and that's what I learned. But as I moved further into my career, I went the opposite direction where I just rarely mock anything. I hit the database in every test. A very typical test for me will be: hit X controller endpoint or some URL, expect that some result is returned from the controller, and make sure the database looks a certain way by either querying the database or making sure the right data is in place. That's very typical. 90% of my tests look like that.
And that just gives me so much more confidence than the old over-mocked, over-complicated noisy tests that really are just so brittle, because every time the implementation of the method changes, you're updating your test, which is a huge code smell for me that your tests are way too brittle. If you feel like every time you update your controller in some minor way, you also have to update your tests, that's defeating the whole point of tests, which is to let us refactor with confidence without having to change our tests every time.
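The shape of the typical test Taylor describes, sketched with hypothetical route and model names:

```php
public function test_publishing_a_post(): void
{
    $post = Post::factory()->create();

    // Hit the endpoint and check the response...
    $this->actingAs($post->author)
        ->post("/posts/{$post->id}/publish")
        ->assertRedirect("/posts/{$post->id}");

    // ...then verify the resulting database state,
    // not the controller's internal implementation.
    $this->assertNotNull($post->fresh()->published_at);
}
```

Because the assertions only cover the input (the request) and the outputs (the response and the database), the controller's internals can be refactored freely without touching this test.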

Matt Stauffer:
And as you were saying that, I was just thinking, if you refactor the implementation of a particular feature without changing the input or the outputs, your tests shouldn't have to change. And if they do, that's a sign that you're too tightly coupled to-

Taylor Otwell:
You're too coupled to the implementation.

Matt Stauffer:
... testing implementation. Absolutely. Which is interesting, because when you say tight coupling, normally people are using it critically of doing something like feature tests: "Your tests are too coupled to the framework." But the thing is, using these more broad feature tests, where you're giving input and testing the output, allows you to not be coupled to a particular implementation, which allows you to write code, and rewrite code, and rewrite code without constantly rewriting your tests.

Taylor Otwell:
The whole point of tests is, of course, to make sure our code is correct and does what we expect, but also to be able to refactor with confidence, and you just can't do that if you have to change the test every time you refactor.

Matt Stauffer:
And it's super interesting because when I did the Valet 3 to Valet 4 rewrite, I was really spending a lot of time in the tests, and the Valet tests have to be very mock heavy because they're testing what happens when your system is in a certain state, and there's no way... You can't fake the system, or fake the database, or fake the file system, whatever, without doing mocks. And it had just made me realize how much of a smell heavy mocking is, because that ended up with the Valet tests being very tightly tied to implementation, and that was just a necessary evil because of Valet, but it just helped me realize how far we've come from when that's what all of our web application tests looked like. For sure.
One of the things you mentioned as we were going there was you like to test... You give a particular input to a route, you test that the response is the way you want, and then you test the database afterwards. How often are you going to test the database using assertDatabaseHas versus checking the output on the page that you know reflects the state of the database?

Taylor Otwell:
I actually usually run database queries to make sure, and I honestly rarely use that assertDatabaseHas stuff, actually. I just run a database query, expect that a count is a certain number or that the right data is in place. Or, if I already have a model, I'll just call refresh on the model and make sure the data looks correct. I think because a lot of our front ends are Inertia-based or whatever, I'm not usually inspecting the actual HTML content so much as I'm inspecting the data that's being sent back to the front end. As far as the front end, we have a suite of Dusk tests on Nova itself to make sure our front end works the way we expect. But anyway, the database check is the typical flow for me.
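So instead of `assertDatabaseHas`, the assertion might be a plain Eloquent query or a model refresh. A sketch with hypothetical models (the `hasPendingInvitations` method is made up):

```php
$this->post("/teams/{$team->id}/invitations", [
    'email' => 'taylor@example.com',
]);

// A plain Eloquent query against the table...
$this->assertSame(1, Invitation::where('team_id', $team->id)->count());

// ...or refresh an existing model instance and inspect it.
$this->assertTrue($team->refresh()->hasPendingInvitations());
```

A side benefit of this style is that scopes and casts on the model apply to the assertion the same way they will in real application queries.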

Matt Stauffer:
I like that for two reasons. One is that I think if you test the HTML, then now you're tied in this place where what if that HTML changes? Now this test that doesn't have anything to do with that HTML has to change. Whereas if you just inspect the database, you're good. And then you could have a test later that says, "Depending on the status of the database, check to make sure that the HTML is the way you want." That test can be about your output if you want, and it might be in Dusk instead.
But the other thing is, I hadn't thought about this before, but you choosing to use a database query instead of using the assertions that are directly tied to the database means it's clearer, because it's literally the Eloquent code we all deal with, and that should exist in that codebase already. But also it's going to be more connected to what the actual practical functional queries are going to be elsewhere. So if you've got scopes or anything like that, those scopes will be reflected in that query in a way they wouldn't be in the assertDatabaseHas. And so you're seeing more of a practical view of what would actually come back from an average Eloquent query as the result of this database being in the state it's in right now.

Taylor Otwell:
And now, one area where I do, I guess you could call it mocking in a way, is like Bus Fake, Queue Fake, Mail Fake, all the Fakes in Laravel; I do use those pretty heavily. That feels less brittle to me, and the reason why I use those is, for example, if I'm hitting a controller that queues a job, I'll call a Queue Fake to basically turn off the queue, but I can still make assertions that the job was actually dispatched; it's just not actually going to execute. Because I usually write a separate test for the queued job itself, where I'll actually create the job, call the handle method, make sure it does what it's supposed to do, query the database, again, very similar to my other tests. But I don't usually let that actually execute during the controller endpoint thing. I'll just make sure that the controller dispatched the jobs I expect it to dispatch, and write that test in a different test class.
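A sketch of that split, using Laravel's real `Queue::fake` helper but hypothetical job and model names:

```php
use Illuminate\Support\Facades\Queue;

public function test_the_controller_queues_the_report_job(): void
{
    Queue::fake(); // nothing pushed to the queue will actually execute

    $this->post('/reports')->assertCreated();

    // Assert only that the dispatch happened.
    Queue::assertPushed(GenerateReport::class);
}

public function test_the_report_job_itself(): void
{
    $report = Report::factory()->create();

    // Exercise the job directly, in its own test class.
    (new GenerateReport($report))->handle();

    $this->assertTrue($report->fresh()->isGenerated());
}
```

The controller test stays focused on its own outputs, and the job's behavior is covered once, in one place.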

Matt Stauffer:
And to me that's an output. If you say input and output, well, the input is, let's say, a post to a controller, a route, something like that; the output could be that the database is modified, the output could be that there's a certain HTTP response sent back to the browser, the output could be that a notification was sent or a job is queued or whatever. And what you want to do in that controller test or that route test is make sure that output happened. But again, like you're saying here, that doesn't mean you also then want to go test that output. And one of the things I often see in people's tests is that they accidentally end up testing Laravel's code, because they're like, "Test to make sure that this collection mapping works the way I want it to." And I'm like, Laravel has got tests for that already.
And similarly, if you feel like when you dispatch the route, you have to test the whole way through to every single thing that queue job is going to do or every single thing that notification is going to do, you're now expanding the scope of what you're testing, which is not really necessarily... You're getting outside of the scope of what you should be testing. The route should test that the things happen and then by using Bus Fake, by using Queue Fake, by using notification fakes or whatever, you're giving yourself the ability to say, "Test that it was dispatched." And then somewhere else say, "When it's dispatched test to make sure that does this." And now they're handled separately.

Taylor Otwell:
I don't think we were recording yet when we mentioned this, but testing sort of... We're talking about testing the happy path right now, but you also have to test validation failures, error states, and there seems like no end to the amount of tests you can write for all of the invalid scenarios that can possibly happen. And knowing when to stop doing that, I think, is interesting. And I think there's tools for this, right? I think it's called... Is it mutation testing or something like that, where it feeds all sorts of random data into your tests to see if they fail?

Matt Stauffer:
Yeah.

Taylor Otwell:
But anyway, I'm curious what you do there. The things I primarily test when we're trying to test failure states are security related things, like make sure that a user that doesn't own this blog post can't update this blog post, that kind of thing, make sure they can't delete this blog post. Those are the first failure tests we write, especially on something like Forge or Vapor, where we need to be very security conscious. And then on my end, I'll test basic validation errors and make sure I get the validation exceptions that I expect on the right attributes. But of course there's all sorts of other scenarios I'm sure I could test, but I don't find myself writing them. I'm curious if you do anything there, how far do you go down that path?
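One of those first failure tests might look like this (routes and models are hypothetical):

```php
public function test_a_user_cannot_update_someone_elses_post(): void
{
    $post = Post::factory()->create();

    // Act as a different user who does not own the post.
    $this->actingAs(User::factory()->create())
        ->put("/posts/{$post->id}", ['title' => 'Hijacked'])
        ->assertForbidden();

    // Confirm the unauthorized write never reached the database.
    $this->assertNotSame('Hijacked', $post->fresh()->title);
}
```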

Matt Stauffer:
It's a fantastic question because I don't know, I feel like there's a part of me that thinks I should test every single thing to make sure it's validated. And sometimes I've done that, especially if it's very, very important for this client for every single piece of data to be tested the way we want. So we'll usually use data providers to make sure that each one is tested individually and each one gives the error you want. Some of the cleanest code I've seen lately tests one valid one to make sure it works and one completely invalid one, and just tests to make sure it gets all the errors. And I'm like, "I don't mind that as a stopgap one." You test one that has all the data that you know is required and make sure it actually goes through, and you test one that's literally an empty array. You're posting an empty array and you test to make sure that you get every single validation error that you expect. And then hopefully you're good.
And one of the things that does require you to do is not have any one-off validation where, if this one's invalid, you don't even get to the others. And that's one of the things I see people do often: they get to a very complicated validation rule, and instead of writing a validation rule and putting it in the validation array, they'll do a manual check before they hit the validator. And one of the downsides of that is if that one fails, you only get that error, you don't get the other errors. So that's one of the many reasons why I think it's often worth the work of taking an extra 30 minutes to an hour of writing a custom validation rule for that weird edge case versus just doing a manual database check before the validator runs.
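Moving a manual pre-check into a rule object keeps it inside the validator, so all the other errors still surface alongside it. A sketch against Laravel's `ValidationRule` contract, with a hypothetical repository-name format check:

```php
use Closure;
use Illuminate\Contracts\Validation\ValidationRule;

class ValidRepositoryName implements ValidationRule
{
    public function validate(string $attribute, mixed $value, Closure $fail): void
    {
        // Hypothetical format check: expects "vendor/name".
        if (! preg_match('#^[\w.-]+/[\w.-]+$#', $value)) {
            $fail('The :attribute must be a valid repository name.');
        }
    }
}

// Used alongside the other rules, so every error is collected together:
$request->validate([
    'repository' => ['required', new ValidRepositoryName],
]);
```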
But I completely agree with you, though: no matter what I'm doing, I need to make sure, and I always tell people this, in testing, make sure that the things that are going to get you fired or the things that are going to put your company on the front page of the New York Times... When I say that, you know what that means for your company, right? Test those things. And so if it is accidentally allowing somebody to charge somebody money they shouldn't, or log into somebody else's security thing or whatever, test those no matter what. And then everything else flows down from there.

Taylor Otwell:
That makes sense.

Matt Stauffer:
Do you use data providers at all? It's something that I feel like I should use more than I do. And with you mentioning it's easier in Pest, I'm like, "Maybe I should use Pest more just so I could use data providers more."

Taylor Otwell:
The ergonomics of using data providers are definitely much better in Pest than in traditional PHPUnit. I wouldn't say I use them a lot; we use them more when testing Laravel itself than I seem to use them in real applications, I think. I don't know why. Like I said, in this Vapor project we had a unit test that does use a data provider, but definitely over 95% of our tests, I'm sure, do not use a data provider.

Matt Stauffer:
And I'm very similar in that if I'm dealing with open source code where I can imagine just all sorts of different user inputs, all sorts of different uses, I'll usually have an array of some sort that says, "Here's all the millions of different ways that could..." And then later when I get a failing test, or I get an issue or something like that, and I'm like, "You didn't cover this one string or that one string," first thing I'm going to do is going to add it to that array and then I'm going to fix the code until that thing is green. And now we know there's one more thing on the stack of things that we're covering in this code. Very, very common. In my code-

Taylor Otwell:
Data providers feel very tied to validation to me, validation scenarios where you need a lot of different data.

Matt Stauffer:
Or like you said... No, even the one you said was a check, it's validating to make sure that this thing is a valid repo. I think that's very common for me. I don't do data providers in checks that go against routes. I've seen people do that before where they're like, I'm going to give every single field, like we were mentioning before, every single field is going to have a failing state with all the rest of them having a passing state, and I'm going to run through all of it, and I think that's overkill. And I've seen that quite a bit in the last week, where I think out of an abundance of caution, people will just write every single possible failure state they can possibly imagine as an individual test, and each of those tests is 10, 15 lines long. And although I understand where it comes from, it's too much. It's one of these where I don't have a hard and fast rule, but you know it when you see it, and I'm like, "Sometimes there's just too many tests against the same route."

Taylor Otwell:
It's an interesting thing, because some people are very dogmatic about a hundred percent test coverage over your entire code base, whereas other people, I think, emphasize more like: test the things that you need to test the most to gain the most confidence in your application, test the most sensitive parts of your application the most, obviously. And then the more peripheral parts that are not as important, maybe you don't need thousands and thousands of lines of tests over those things if they're not very central to the application necessarily.

Matt Stauffer:
I think that's a great point. Code coverage is sometimes obsessed over in ways that aren't healthy. Some companies say every single pull request must increase our code coverage, or we are reaching for a hundred percent code coverage. And I think that "every single PR must increase our code coverage" is okay if you have no code coverage and you're trying to get up to 60, or 70, or something like that. I think that 50% code coverage is good, 60 is better, 70 is better. I feel like when you're getting up to those higher numbers, you really have to obsess over things, and it's one of those things where you're satisfying the machine versus the human who's actually going to work with those tests in the future. And while I do think that tests are good for covering our butts, part of what tests should also do is inform and protect and help future programmers working on the project; tests are documentation. And when you're looking for a hundred percent code coverage, you're not doing that. You are making it very hard to reason with.

Taylor Otwell:
It's like, what does a hundred percent even mean? It almost feels like a meaningless number. It means, I guess, that all of the code in the project was executed at some point during your test suite, but you don't know if it was the right test or if it tested the right thing. And it doesn't mean that all of the other invalid scenarios were also tested. It's not a helpful number, in my opinion, because it doesn't really mean much; it doesn't tell you if the tests were actually good, or if they were helpful, or if they even checked the right thing at all, from a business perspective and from a security perspective or whatever. I don't know. I don't think it's a helpful thing because I don't think it means much.

Matt Stauffer:
Totally. And I would say that there's something to be said about relying on existing metrics for testing, because we don't all know all the things that can go wrong. There's something to be said for at least looking to external things. One of the things I remember is in the first app I ever built, and thankfully it wasn't public, this was in the days of CodeIgniter, where we didn't have the type of protections we have in Laravel: I had a user profile update form, and the user ID that you were updating was just a hidden input, so someone could just go view source in their browser, change that input to be two, hit post, and then change someone else's profile. And thankfully I talked to an older programmer, and he was like, "Yeah, I just changed your password. Let's work on this." But the good news is things like that are usually... We're protected from them because of the conventions that are built into Laravel.
A lot of the dumb things that we wouldn't know to test for because we're so junior, we're really protected from. Unless, and this is so funny, unless you're fighting the framework. The more you fight the framework in any way, shape, or form, the more you have to know everything. But the more you work with the framework's conventions, the more you can just say, "What do I need to know from a business perspective that my business cares about?" And you can trust that a lot of the potential security concerns aren't going to be there as long as you're working with Eloquent, and validation, and all that kind of stuff. Once again, a pitch for people to not fight the tools, don't fight the framework. But also, to me, as someone who says, "The point of software is value to people and our businesses," the validation should be validating that the bad thing doesn't happen to the business and the good thing does happen: the user doesn't have a bad experience, and they do have a good experience. Those are really the things that we're checking for.
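The hidden-input vulnerability Matt describes can be sketched in plain PHP. The function names are hypothetical; the safer version reflects the general convention that frameworks like Laravel push you toward, where the user's identity comes from the authenticated session rather than the request body:

```php
<?php
// Insecure: the ID comes straight from the form, so any visitor can edit any
// profile by view-sourcing the page and changing the hidden user_id input.
function updateProfileInsecure(array $formInput, array &$users): void
{
    $id = (int) $formInput['user_id']; // attacker-controlled
    $users[$id]['name'] = $formInput['name'];
}

// Safer: the ID never comes from the form at all; it is derived from the
// authenticated session server-side.
function updateProfileSecure(int $authenticatedUserId, array $formInput, array &$users): void
{
    $users[$authenticatedUserId]['name'] = $formInput['name'];
}
```

The fix is structural, not a validation rule: if the attacker-controllable value is never read, it can never be abused.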
And it should be clear. One of the reasons that people... I'm starting a rant here a little bit, but when people criticize Tailwind from a CSS perspective, they say, "There are so many classes." But the thing is, I have built many CSS-based structures, I have talked about OOCSS and all these CSS architectures pre-Tailwind, and what I found, and one of the reasons that I came around to Tailwind, was that when I came back to a code base two years later, no matter how well I'd architected that code base originally, not only was I not fully able to follow the CSS, but other people who came along had at some point said, "I don't know where the heck this is supposed to go. I'm just going to put it in an extra CSS file." When people don't understand the existing systems, they don't work with them. They don't say, "I'm going to go learn this whole system and figure out where it should go." They just put crap in places you don't expect it to be, or where it shouldn't be. So when people can't reason with it-

Taylor Otwell:
This is the thing I find most ironic about the Tailwind criticism on the Twitter sphere, the X-sphere, or whatever it's called these days, is that you see people say, "Will it scale? Is it maintainable?" And the ironic thing about it is that's precisely why it's so good, is because it does scale and it is maintainable.

Matt Stauffer:
Yes, it is maintainable.

Taylor Otwell:
Unlike all these other custom CSS solutions that we've concocted.

Matt Stauffer:
And to me it's the same thing here with these tests. If I have a hundred percent coverage test suite that has thousands of tests, because you had to catch every single thing, I'll have no idea where to look when I add something or edit something. Now granted, tests give you a little bit more than CSS does, because you can see when a test fails, but still, if I don't have the ability to understand the code base of tests, then those tests become just a hard and fast fixed thing that I can't change, I can't modify. If I write code that breaks them, I'm just going to un-write that code. You know what I mean? What I want is tests that I can understand, all of them; I can read through them. Literally, when I do code review, the test directory is the first place I go. Because in a well-written code base, the test directory is going to tell me what the thing does and what the thing is not supposed to do. That's it.
All right. I had one last topic for us, and then we're at our 30-minute mark. The last topic was traits versus inheritance. And there are other ways of talking about this topic, but one of the things that's come up often in PHP, but then also elsewhere, is final classes, which have come up a lot lately, because there's been some drama about that. There are people who are very, very anti-inheritance and very pro mix-in, or trait, or whatever you want to call it. And the thing that I wanted to talk about is the fact that in Laravel we use both a lot. There are a lot of traits and there's a lot of inheritance, and I also know that in the past you've talked about the fact that you think this is a little bit of a different conversation in open source code versus individual application code.

Taylor Otwell:
Yes, a hundred percent.

Matt Stauffer:
Can you tell me a little bit about your thoughts about traits versus inheritance?

Taylor Otwell:
Yeah. I think people get the impression that we use inheritance all over the place in Laravel, which is definitely not the case. I'll start with application code. I actually very rarely find myself using inheritance when writing application code in a project. It's pretty unusual, I would say, that I'm ever extending a class that's also in my application. It just doesn't seem to come up much. I don't know why. I occasionally use traits, but even that's not super common, I would say, traits that aren't built into Laravel. I'm talking about my own traits that I write and use in other classes. It definitely happens some, but it's not something I'm reaching for all the time. Whereas in open source code, I think inheritance is useful in certain scenarios. The things that come to mind are very driver-based things, like the cache drivers, the queue drivers, the database drivers.
Or you have a lot of shared functionality across this driver system, where you have a Redis driver and then a cache driver, and they do very similar things, and you just want to share some code between them. I think with the advent of traits, some of that extension became unnecessary, but it was already written at that point, because it was already built into Laravel. Would I use inheritance in those situations now, if I was writing Laravel from scratch? Maybe it wouldn't be necessary; I could just use a couple of traits to share some helpful methods with those classes. But again, I agree that using inheritance all the time in your application code is weird, or at least it doesn't seem to come up a lot for me.
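The trait-based alternative Taylor mentions might look roughly like this. The class and method names below are hypothetical illustrations, not Laravel's actual internals:

```php
<?php
// Instead of two cache drivers extending a shared parent class, each driver
// pulls shared helper methods in from a trait.

trait SerializesCacheValues
{
    public function serializeValue(mixed $value): string
    {
        // Store numbers as plain strings; serialize everything else.
        return is_numeric($value) ? (string) $value : serialize($value);
    }

    public function unserializeValue(string $payload): mixed
    {
        return is_numeric($payload) ? $payload + 0 : unserialize($payload);
    }
}

final class RedisCacheDriver
{
    use SerializesCacheValues;
    // Redis-specific storage logic would live here.
}

final class FileCacheDriver
{
    use SerializesCacheValues;
    // Filesystem-specific storage logic would live here.
}
```

Both drivers get the shared behavior without any parent class, so neither is coupled to the other's hierarchy.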
We definitely use interfaces sometimes. For example, in Laravel Forge you can connect your account to GitHub, Bitbucket, GitLab, those source control providers. We have an interface called source control provider, and it has methods like get commit hash, get repository. I don't think those extend anything; they're just an interface that we use. I don't know about you. Do you find yourself using inheritance, and I mean outside of extending the Eloquent model? Do you find yourself using inheritance in application code a lot?
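The interface approach Taylor describes could be sketched like this. Forge is closed source, so the interface and method names below are guesses for illustration only:

```php
<?php
// Each provider implements the same contract without sharing a parent class.

interface SourceControlProvider
{
    public function getLatestCommitHash(string $repository, string $branch): string;
    public function getCloneUrl(string $repository): string;
}

final class GitHubSourceControl implements SourceControlProvider
{
    public function getLatestCommitHash(string $repository, string $branch): string
    {
        // A real implementation would call the GitHub API; stubbed here.
        return 'abc123';
    }

    public function getCloneUrl(string $repository): string
    {
        return "git@github.com:{$repository}.git";
    }
}

// Application code type-hints the interface and accepts any provider:
function latestHash(SourceControlProvider $provider, string $repo): string
{
    return $provider->getLatestCommitHash($repo, 'main');
}
```

The payoff is exactly what the conversation describes: the rest of the app depends only on the contract, so adding Bitbucket or GitLab later means writing one new class, not touching existing call sites.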

Matt Stauffer:
No, and I was actually going to mention the same thing you did about interfaces. I found that a lot of the times we would be tempted to reach for inheritance, it turns out an interface does the job. When I would be most likely to do it is, let's say, a source control provider is technically like an interface, because you want to be able to take a generic source control provider throughout your app, and every single one of these is going to have these three methods, so you can build an interface for them. But what's also the case is that 90% of them are going to rely on this one private method that makes it really easy to do something, or 90% of them have the exact same way of parsing a Git URL, and only 10% of them have a custom way. Then what I'll probably end up doing is a base abstract class, source control provider, that they all extend, and then each of them, A, can rely on those private... Protected, not private, those protected methods that the base one provides, like little helpers or whatever.
And then B, only the ones with the custom stuff have to customize that method, while the rest all share the same method. Again, let's say it's the one that parses the Git URL or something like that: you don't even have to write that method, it can just be inherited from the parent. I know Takeout is open source, so it's not quite the same, but Takeout has the same concept: for every single feature you can install with Takeout, whether it's MySQL, or MSSQL, or whatever else, 98% of the time they have the same way of connecting to Docker, and they have the same way of building the string that you're using. Every once in a while there are those edge cases. So I don't want to have to... And what I could have done was say, "We've got a trait for building that."
They all implement the same interface, they pull in the trait, and that's the way that 90% of them do it. What I've found is that it's easier for my brain to reason about "I've got one main way of doing it, and if you ever see this method in a specific implementation, that's the one that's changing it," versus "they've all got this trait." But that said, when it's not that circumstance, I can't tell you when we use inheritance. We do use traits. We use interfaces a reasonable amount, but I feel like the people who really pitch interfaces have a bigger idea of the value of interfaces than we do, because it has to be a circumstance like that, where we have multiple implementations, and probably going to have more in the future, of a thing. And it's not that common of a use case in application code, in my personal experience.
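The base-abstract-class pattern Matt describes, where a shared protected helper covers the 90% case and only the odd provider overrides it, could be sketched like this (all names hypothetical):

```php
<?php
abstract class BaseSourceControlProvider
{
    public function repositoryName(string $cloneUrl): string
    {
        return $this->parseRepositoryFromUrl($cloneUrl);
    }

    // The shared 90% case: most providers inherit this as-is.
    protected function parseRepositoryFromUrl(string $cloneUrl): string
    {
        return preg_replace('/\.git$/', '', basename($cloneUrl));
    }
}

final class StandardProvider extends BaseSourceControlProvider
{
    // Inherits parseRepositoryFromUrl() unchanged: nothing to write here.
}

final class QuirkyProvider extends BaseSourceControlProvider
{
    // The 10% case: this provider's URLs need custom handling, and seeing the
    // override here signals exactly where behavior diverges from the default.
    protected function parseRepositoryFromUrl(string $cloneUrl): string
    {
        return strtolower(parent::parseRepositoryFromUrl($cloneUrl));
    }
}
```

This captures the readability argument from the conversation: an override in a subclass is a visible marker of divergence, whereas with a trait every class looks identical whether or not it customizes anything.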

Taylor Otwell:
Mine either. It's just not something that seems to come up a lot or seems to make sense. But during the unfinalize drama, it felt like people had the idea that it's just extends everywhere, that we're just using inheritance all over the place, and that's definitely not the case.

Matt Stauffer:
Totally. I think that's my last one for today, but one of the things I remembered is, in the past when it was you and me and a couple other guys, podcasting, I always had a fun question at the end and I want to start bringing that back.

Taylor Otwell:
I forgot about that.

Matt Stauffer:
I know, right? And I had just remembered that I, and this is a little bit of a leading question, but that's okay, I had just seen pictures of you and your family going up to Vermont for fall, everything, and I wanted to ask you, what is your favorite season?

Taylor Otwell:
Gosh. Honestly, probably summer overall, because we can hang out by the pool and have people over and we're outside. I also like these in-between times, like fall and spring where the temperatures are mild and it's super nice to be outside. I think winter is probably the whole family's least favorite season.

Matt Stauffer:
Got it.

Taylor Otwell:
Because we're just not getting much outside time, it's cold, it's dreary. Abigail and-

Matt Stauffer:
You guys love the outside.

Taylor Otwell:
Yeah, I guess. I do like it during the summer. Now that we live by the lake and stuff, it is nice to be out there. Abigail hates the winter, hates the winter. And I feel like winters in Arkansas are fairly mild; it could be much worse compared to, say, New York or up north or something, but she does not like it. What about your family?

Matt Stauffer:
I'm a fall guy, man. I'm autumn, and I didn't know that my kids [inaudible 00:38:10] that too. I grew up in Michigan, and so every single fall... We've come out of summer, and summer's wonderful. Every single fall it starts cooling down, the leaves start changing colors. The oldest cider mill in Michigan is in my hometown, and I would drive past it every single day, so we'd go get apple cider donuts, we'd go get apple cider, and there would just be Halloween events. I'm a big guy, I get hot very quickly, so in fall I'm actually able to put on clothing that I like the way it looks, versus just spending all summer being like, "What can I put on that keeps me sweating as little as possible?" And I just have so much nostalgia for it. And then I moved to Florida, where there is no fall, there is no in-between; it's just hot and a little bit less hot.
The seasons don't change, you don't get a snowy Christmas. My degree was in English, but my focus was on creative writing, so I took a poetry class where I literally wrote about fall for an entire semester; all of my poems were about fall. And thank God the poetry professor was also from... She's from, what do they call it? New England. She's from New England somewhere, so she was also nostalgic, so I did really well in that class. But I was like, "Oh my God, I have such strong feelings about how much I miss it." To the point that at one point during my time at Tighten, I was moaning about it, and Dave Hicken, who lives in Connecticut, bought a box of apple cider donuts and mailed it to me just so I could have them.

Taylor Otwell:
Nice.

Matt Stauffer:
I didn't know my kids loved it so much, and I was talking to them about fall, and it's just starting to hit fall this week in Georgia, and they were like, "Let's go up to North Georgia and go to the apple things." Sorry to keep telling stories, but last weekend... I think two weekends ago, my fiancée, who is from Omaha, had been telling me for ages about this pumpkin patch there called Vala's Pumpkin Patch. So for my birthday we went up; I met a whole bunch of her family, but then me and her immediate family, who I know really well, all went to it. And it's not just a pumpkin patch, Taylor. It is an apple cider mill, it is a pumpkin patch, it has the little train rides.

Taylor Otwell:
It's a fall theme park, basically.

Matt Stauffer:
It is a fall theme park. And it's so funny, because each of them was like, "When I moved away, I learned that pumpkin patches everywhere are literally just pumpkin patches." It was a 15-minute hay bale ride just to get to the pumpkin patch, because the pumpkin patch itself was so huge. We spent an entire day drinking apple cider, riding little go-karts and stuff like that. I'm like, "This is heaven. This is the best birthday trip I could possibly take." I'm a nut for fall.

Taylor Otwell:
That's cool.

Matt Stauffer:
Turns out my whole family's the same way too.

Taylor Otwell:
That's really cool.

Matt Stauffer:
Before we're done, I just wanted to ask, what is summer in Arkansas like? Is it in the 80... Sorry, Fahrenheit everybody, but is it in the 80s all summer or is it-

Taylor Otwell:
Way hotter than that.

Matt Stauffer:
Really?

Taylor Otwell:
It's like over a hundred all summer.

Matt Stauffer:
Is Arkansas dry or is it humid?

Taylor Otwell:
It's 80 now. You know what I mean? This feels good.

Matt Stauffer:
October 10th is 80. Is it dry or is it humid?

Taylor Otwell:
It was 80 yesterday.

Matt Stauffer:
Okay.

Taylor Otwell:
It's very humid in the spring. In the summer it does dry out a bit, but it can still get humid in the summer, which pushes the heat index way up to 115 or something like that.

Matt Stauffer:
Thank God you got a pool, but I can't believe... I know that you guys love summer, but I was assuming, "They must love summer because it's not that hot there." That's very hot.

Taylor Otwell:
That's very hot.

Matt Stauffer:
You're a better person than me.

Taylor Otwell:
A couple of years ago, where I lived was as hot as Death Valley, one of the hottest places in the United States, on that day. Honestly, it was pretty crazy. It was like 116 degrees.

Matt Stauffer:
I was sitting here being like, "I'm in Georgia where it's really hot." No, that's horrible.

Taylor Otwell:
Yeah.

Matt Stauffer:
Cool. Well, Taylor, thank you so much for hanging out on us again, and thank you to all of y'all for listening to us. If you have questions and things you want us to talk about, make sure to just tag us on Twitter @LaravelPodcast, or @stauffermatt, and @taylorotwell. And I know that it's supposed to be called X, but I'm never changing, it's Twitter forever. And until then, we'll see you all next time.

Taylor Otwell:
All right, see you.

Creators and Guests

Matt Stauffer
Host
CEO Tighten, where we write Laravel and more w/some of the best devs alive. "Worst twerker ever, best Dad ever" –My daughter

Taylor Otwell 🪐
Host
Founded and creating Laravel for the happiness of all sentient beings, especially developers. Space pilgrim. 💍 @abigailotwell.