This week, the Laravel team released v10.30, which includes the ability to dispatch events based on a database transaction result. The release also brings a lot of minor fixes, added tests, and miscellaneous changes. See the changelog for a complete list of updates.
Mateus Guimarães and Taylor Otwell collaborated on dispatching an event based on the result of an in-progress database transaction:
What this PR aims to do is to make the event itself aware of transactions. So, if a transaction fails, the event doesn't even get published. That way, it doesn't matter if the listeners are queued or not or if they have afterCommit enabled, and you can ensure, in the tests, that the event did not get published.
Thanks to this contribution, you can now add the ShouldDispatchAfterCommit interface to an event, which instructs the event dispatcher to hold off on dispatching the event until the transaction is committed; if the transaction is rolled back, the event does not fire.
Here's a contrived example of how it might work—given the following transaction and dispatch amid the transaction:
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;

DB::beginTransaction();
Log::info("Transaction started");

$order = Order::create(['amount' => 5000]);
// More stuff...

Log::info("Dispatching OrderCreated event");
OrderCreated::dispatch($order);

Log::info("Closing transaction");
DB::commit();
Here's what the logs might look like:
local.INFO: Transaction started
local.INFO: Dispatching OrderCreated event
local.INFO: Closing transaction
local.INFO: Order created event handled...
And finally, the event might look like the following:
use Illuminate\Contracts\Events\ShouldDispatchAfterCommit;
class OrderCreated implements ShouldDispatchAfterCommit
{
// ...
}
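For completeness, here is a hypothetical listener that would produce the final log line shown above (the class name and message are purely illustrative):
use Illuminate\Support\Facades\Log;

class HandleOrderCreated
{
    public function handle(OrderCreated $event): void
    {
        // Because OrderCreated implements ShouldDispatchAfterCommit,
        // this only runs once the surrounding transaction commits.
        Log::info('Order created event handled...');
    }
}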
Along with ShouldDispatchAfterCommit, the pull request expanded to include other interfaces like ShouldHandleEventsAfterCommit for listeners and ShouldQueueAfterCommit, which may be implemented on jobs, listeners, mail, and notifications.
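As a rough sketch, a listener opting in via the listener-side interface could look like the following (the class name is hypothetical):
use Illuminate\Contracts\Events\ShouldHandleEventsAfterCommit;

class SendOrderConfirmation implements ShouldHandleEventsAfterCommit
{
    public function handle(OrderCreated $event): void
    {
        // The listener itself waits for any open transaction to commit
        // before handling the event, even if the event class does not
        // implement ShouldDispatchAfterCommit.
    }
}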
Mior Muhammad Zaki contributed test improvements that make Laravel compatible with the upcoming release of PHPUnit 11; see Pull Request #48815 for details.
You can see the complete list of new features and updates below and the diff between 10.29.0 and 10.30.1 on GitHub. The following release notes are directly from the changelog:
artisan migrate --pretend command 🚀 by @NickSdot in https://github.com/laravel/framework/pull/48768
QueriesRelationships@getRelationHashedColumn() typehint by @cosmastech in https://github.com/laravel/framework/pull/48847
The post Dispatch Events after a DB Transaction in Laravel 10.30 appeared first on Laravel News.
Read more https://laravel-news.com/laravel-10-30-0
If you’ve ever been curious about Sentry, Launch Week is for you. Sentry will be announcing new products, showing exclusive demos, and talking all things developer, every. single. day. (For one week)
Here is the schedule, and all talks will be live at 9 AM PT each day.
Reserve your spot and be instantly entered into the daily raffle for exclusive prizes.
The post Register now for Sentry Launch Week! appeared first on Laravel News.
Read more https://laravel-news.com/register-now-for-sentry-launch-week
Adding a second server to your app can be a great way to improve your app's performance and/or increase its reliability. However, there are a couple of things you need to keep in mind when adding a second server.
In this article, we'll discuss the key things you need to consider when adding an additional server to your app. We’ll use a Laravel application hosted on Laravel Forge as the example here, but the concepts can be applied to any kind of application, not just those written in PHP.
First, to make sure we are speaking the same language, here is the outline of the current infrastructure: the app is currently running on a single server created by Laravel Forge and hosted on AWS.
The first thing you will need is a load balancer. This will be the entrypoint of your application, meaning you will point your domain DNS to the load balancer instead of the server directly. The job of a load balancer is, as you guessed, to balance the incoming requests between all the healthy and registered servers.
From now on, every time we mention “App Server”, this will be referring to a single server running our Laravel application.
One of the nice features of a load balancer is the health checks, which serve the purpose of making sure that all connected servers are healthy. If one of the servers fails for some reason, some unscheduled maintenance for example, the load balancer will stop routing requests to that server until the server is up, running, and healthy again.
We recommend using an application load balancer, which gives you more robust functionality down the road if you need it. Application load balancers can route traffic to specific servers based on the requested URL and even route requests to multiple applications. For now, we will have it evenly balance traffic using the round-robin method.
Since your domain will now be pointing to the load balancer, your SSL certificate should also be in the load balancer now, instead of in your servers.
Currently, there is one server running our app, along with local instances of MySQL and Redis. What happens when the second server gets attached to our load balancer?
Having multiple sources of truth for our database and caching layers could generate all kinds of issues. With multiple databases, a user could be registered on one server but not the other. With one Redis instance per server, you could be logged in on App Server 1, but when the load balancer redirects you to App Server 2 you would have to sign in again, since your session is stored in the local Redis instance.
We could make App Server 2, or any future App Servers connected to our load balancer, connect to App Server 1’s services, but what happens when App Server 1 has to go down for maintenance or unexpectedly fails? One of the reasons to add a second server is to gain reliability and scalability, and depending entirely on App Server 1 does not solve that problem.
The ideal scenario, when we have multiple app servers, is to have external services like MySQL and Redis running in a separate environment. To achieve this, we can use managed services, like AWS RDS for databases and AWS ElastiCache for Redis, or unmanaged services, meaning we set up a separate server to run those services ourselves. Managed services are usually the better option if cost is not an issue, since you don’t have to worry about OS and software upgrades, and they usually have a better security layer.
Let’s imagine we decided to go with managed services for our application. Our Laravel configuration would end up looking something like this:
-DB_HOST=localhost
+DB_HOST=app-database.a2rmat6p8bcx7.us-east-1.rds.amazonaws.com
-REDIS_HOST=localhost
+REDIS_HOST=app-redis.qexyfo.ng.0001.use2.cache.amazonaws.com
After everything is set up, our infrastructure would look like this when connecting our App Servers to our services.
Our application allows users to upload a custom profile picture, which shows up when you are logged in. On our current infrastructure, images get saved in an internal folder in our application and also get served from there. Now that we have multiple App Servers, this becomes an issue, since images uploaded to App Server 1 will not be present on the second server.
There are a few ways to solve this. One of them is to have a shared folder between your servers (Amazon EFS, for example). If we choose this option, we would have to configure a custom filesystem in Laravel which would point to this shared folder location on our App Servers. While a valid option, this requires some knowledge to set up the disk on the servers, and for every new server you set up, you would have to configure the shared folder again.
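If you went that route, the shared folder would be registered as a custom disk. Here is a minimal sketch in config/filesystems.php, assuming a hypothetical EFS volume mounted at the same path on every App Server:
// config/filesystems.php
'disks' => [
    // ...
    'shared' => [
        'driver' => 'local',
        // Hypothetical mount point for the shared EFS volume
        'root' => '/mnt/efs/app-uploads',
    ],
],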
We usually prefer using a Cloud Object Storage service instead, like Amazon S3 or DigitalOcean Spaces. Laravel makes it really easy to work with these services if you are using the File Storage features. In this case, you would only have to configure your filesystem disk to use S3 and upload all of your previously uploaded user content to a bucket.
-FILESYSTEM_DISK=local
+FILESYSTEM_DISK=s3
AWS_ACCESS_KEY_ID=your-key
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=your-bucket-name
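Because the File Storage API abstracts the disk away, the application code barely changes. As a rough sketch (the field and column names below are just illustrative), storing a profile picture on the default disk now writes straight to the bucket:
use Illuminate\Http\Request;

public function updateAvatar(Request $request)
{
    // With FILESYSTEM_DISK=s3, the default disk is the shared bucket,
    // so every App Server reads and writes the same files.
    $path = $request->file('avatar')->store('avatars');

    $request->user()->update(['avatar_path' => $path]);
}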
All your user-uploaded content will be stored in the same, centralized bucket. S3 has built-in versioning and multiple layers of redundancy, and any additional app servers we add to our load balancer can use the same bucket to store content.
If your application grows in the future, you can set up AWS CloudFront, which acts as a CDN layer sitting on top of your S3 bucket, serving your bucket content to your users faster and often more cheaply than S3 alone.
In step 2, we set up a centralized Redis server, which is the technology we were using to manage our application queues. This will also work for our load balanced applications, but there are a few good options to explore.
If you continue to process your queues on your app servers leveraging the centralized Redis instance, no changes need to be made. The jobs will get picked up by the server that has a worker available to process a job.
Another option is to use a service like AWS SQS, which can relieve some pressure on your Redis instance as your application grows by offloading that workload to another service.
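If you do switch to SQS, the change is mostly configuration. A minimal sketch of the relevant environment values might look like the following; the queue URL prefix and queue name are placeholders for your own account and queue:
QUEUE_CONNECTION=sqs
SQS_PREFIX=https://sqs.us-east-1.amazonaws.com/your-account-id
SQS_QUEUE=default
AWS_DEFAULT_REGION=us-east-1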
When running multiple servers behind a load balancer, scheduled commands would run on each server attached to your load balancer by default, which is not optimal. Not only would running the same command multiple times be a waste of processing power, but it could also cause data integrity issues depending on what that command does.
Laravel has a built-in way to handle this scenario so that your scheduled commands only run on a single server: chaining the onOneServer() method.
$schedule->command('report:generate')
->daily()
->onOneServer();
Using this method does require the use of a centralized caching server, so Step 2 is critical to making this work.
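In practice that means the cache must point at the shared store rather than a per-server one; with the managed Redis instance from Step 2, the environment might simply contain:
CACHE_DRIVER=redis
REDIS_HOST=app-redis.qexyfo.ng.0001.use2.cache.amazonaws.com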
When it comes to deploying your application, you now have more options and things to consider.
We can still deploy our applications using our previous approach, but now we have to make sure we remember to click the deploy button on both servers. If we forget, we would have our servers running different versions of the application, which could cause huge issues.
With multiple servers, it’s probably time to level up the deployment strategy. There are some very good deployment tools and services out there, like Laravel Envoyer or PHP Deployer. These types of tools and services allow you to automate the deployment process across multiple servers, so you can remove human error from the equation.
If we want to go one level deeper in our deployment process, one of the great benefits of having two app servers is that we can temporarily remove one of them from the load balancer so it stops receiving requests. This allows for zero-downtime deployments: remove the first server from the load balancer, deploy the new code, put the server back into the load balancer, then repeat the process with the second server. Once the second server is finished, both servers will have the new code and will be attached to the load balancer. To achieve this, we would use tools like AWS CodeDeploy, but the setup is more complex than our previous options.
Deployment is a very important part of running our applications, so if we can automate it using GitHub Actions or any other CI/CD service out there, we are greatly improving the process. Making the deployment process simple enough that anyone can trigger a deployment really shows the maturity of the development team and the application.
One additional benefit of using a load balancer is that our servers are no longer the entrypoint of our websites. This means we can make our servers accessible only internally and/or restrict access to specific IPs (our IPs, the load balancer’s IPs, etc.). This greatly improves the security of our servers since they are not directly accessible. The same can (and should) be done for our database and cache clusters.
To achieve this, we are going to only allow traffic to port 22 from our own IPs (so we can SSH into the server) and we are going to only allow traffic to port 80 from the load balancer, so it can send requests to the server. The same rules apply for our database and cache clusters.
There are a lot of things to consider when adding additional servers to your infrastructure. It adds more complexity to your infrastructure and workflows, but it also increases the reliability and scalability of your application and improves your overall security.
When considered from the beginning of the process, these recommendations are simple to implement and can have a large impact on improving your app.
The post 7 Tips for Adding a Second Server to your App appeared first on Laravel News.
Read more https://laravel-news.com/adding-a-second-server-to-your-app
Mailcoach, the email marketing platform by Spatie, just announced a new version that includes split testing, MJML support, Livewire 3, data tables, and more.
When creating email campaigns, you might wonder which subject line would result in the most opens, or which copy for a link in your mail would result in the most clicks.
Instead of making those decisions beforehand and hoping for the best, you can now use Mailcoach's flexible split testing. You might also know this as A/B testing, but as Mailcoach allows for more than two variations, we call it split testing.
All lists of data in Mailcoach now use Filament's feature-rich Table Builder component.
To make crafting emails a lot more enjoyable, the folks at Mailjet created a solution called MJML, which stands for "Mailjet Markup Language." It's an easy-to-use abstraction layer over HTML.
Mailcoach now supports MJML out of the box in your templates and email campaigns, and you get code completion suggestions in the editor.
The new suppression list feature is a list of email addresses that will never receive your email.
You can manually add people to this list. Mailcoach also automatically adds an entry for any email that hard bounces. A hard bounce is usually caused by an email address not existing (anymore), and our thinking is that none of your lists should try sending to that address anymore.
Behind the scenes, Mailcoach's hosted and self-hosted versions extensively use Livewire to make the UI interactive.
If you're using the hosted version, this is just a minor detail that doesn't affect you much. However, many users of the self-hosted version requested support for the latest and greatest version of Livewire, v3. We're happy to share that we now use v3 of Livewire everywhere.
In addition to having rewritten our components to use the Livewire v3 specific features, we're also using Livewire's new navigation to make the UI feel more speedy.
In addition to those prominent features, they've also made many small improvements.
You can read more about the new features in their release announcement.
The post Mailcoach now includes split testing, MJML, Livewire 3 support, and much more appeared first on Laravel News.
Read more https://laravel-news.com/mailcoach-next
NaNoWriMo, or National Novel Writing Month, is an annual event where writers from all over the world challenge themselves to write a novel in just one month. It’s a celebration of creativity, determination, and the power of storytelling.
In addition to sponsoring NaNoWriMo, WordPress.com is offering a special gift to this year’s participants as a way to reward your efforts in this exciting challenge. Read on for more information.
WordPress was born from the desire to help anyone tell their story, so we’ve always had a close bond with authors. Below are a number of ways a WordPress.com site can help you on your writing journey.
WordPress pages and posts are flexible enough for you to write the entire novel on your site. Here’s how you can structure your site to make the writing process seamless; see this guide for step-by-step instructions.
Define how and where your content will be organized. Whether it’s chapters, characters, settings, or any other category, WordPress.com provides the flexibility you need.
Each post can represent a chapter, and you have full control to edit or re-order your content as you see fit.
Let the world read your masterpiece or only share it with a select audience. You decide! Utilizing a paywall or newsletter setup may work well for your goals.
But what if you don’t want to write the actual novel within your blog? How can a WordPress.com blog help in that case? In plenty of ways, actually.
One of the most powerful tools an author has in their arsenal is anticipation. Building excitement and intrigue around your book, even before its release, can be a game-changer. And what better way to do this than with a WordPress.com website? Here’s how:
Share short snippets from your novel on your blog. These tantalizing glimpses into your plot can pique the interest of potential readers and keep them coming back for more.
Everyone loves a well-developed character. Use your blog to discuss the backstories of some of your characters. By giving readers a deeper understanding of your characters’ motivations and histories, you can get them invested in their journeys even before they pick up your book.
Keep your readers in the loop by announcing your progress as you write. Whether it’s hitting a word count milestone or completing a particularly challenging chapter, sharing these moments can build a sense of community and excitement.
Allow readers to sign up for newsletters or alerts. They can be the first to know about your novel’s release date, receive exclusive content, or even get special discounts.
If you’re planning book signings, readings, or other events, your blog is the perfect place to let your readers know. It’s a great way to meet your fans in person and build a stronger connection with them.
Use the comments section of your blog to engage with your readers. Answer their questions, discuss plot theories, or simply thank them for their support. This two-way interaction can foster a deeper connection between you and your audience.
In essence, a WordPress.com blog isn’t just a platform for sharing your work; it’s a dynamic space where you can engage with your readers, build anticipation, and create a community around your novel. So, as you embark on your NaNoWriMo journey, remember that your website can be an invaluable companion, helping you connect with readers every step of the way.
To support all the budding authors out there, we’re offering a special 20% discount on the first year of an annual plan. So, if you’ve been thinking about starting a blog or website, now is the perfect time. Check out our Plans & Pricing for more details, and use the coupon code nanowrimo2023 to claim your discount.
Whether you’re a seasoned author or just starting out, WordPress.com is the perfect platform to support your writing journey. We’re excited to be a part of NaNoWriMo 2023 and can’t wait to see all the incredible stories that will emerge from this challenge.
Happy writing!
Read more https://wordpress.com/blog/2023/10/30/nanowrimo-2023/