Planet Drupal

Subscribe to Planet Drupal feed
Drupal.org - aggregated feeds in category Planet Drupal
Updated: 21 hours 38 min ago

Yusef Blog: Update Drupal Automatically - new feature

30 October 2019 - 7:31am
Automatic updates for Drupal core and modules have long been in high demand. Happily, at DrupalCon Amsterdam 2019 the Automatic Updates module was introduced. It is a great module, though still at the beginning of its journey, and automatic updates are a great feature for Drupal. It has been introduced as a contrib module, but it will eventually land in Drupal core. As you can see on its project page, it will be developed in two phases.
Categories: Drupal

Joachim's blog: Multi-site search using Feeds and SearchAPI

30 October 2019 - 6:08am

[This is an old post that I wrote for System Seed's blog and meant to put on my own too but it fell off my radar until now. It's also about Drupal 7, but the general principle still applies.]

Handling clients with more than one site involves lots of decisions. And yet, it can sometimes seem like ultimately all that doesn't amount to a hill of beans to the end-user, the site visitor. They won't care whether you use Domain module, multi-site, separate sites with a common codebase, and so on. Because most people don't notice what's in their URL bar. They want ease of login, and ease of navigation. That translates into things such as the single sign-on that drupal.org uses, and common menus and headers, and also site search: they don’t care that it’s actually sites search, plural, they just want to find stuff.

For the University of North Carolina, who have a network of sites running on a range of different platforms, a unified search system was a key way of giving visitors the experience of a cohesive whole. The hub site, an existing Drupal 7 installation, needed to provide search results from across the whole family of sites.

This presented a few challenges. Naturally, I turned to Apache Solr. Hitherto, I've always considered Solr to be some sort of black magic, from the way in which it requires its own separate server (http not good enough for you?) to the mysteries of its configuration (both Drupal modules that integrate with it require you to dump a bunch of configuration files into your Solr installation). But Solr excels at what it sets out to do, and the Drupal modules around it are now mature enough that things just work out of the box. Even better, Search API module allows you to plug in a different search back-end, so you can develop locally using Drupal's own database as your search provider, with the intention of plugging it all into Solr when you deploy to servers.

One possible setup would have been to have the various sites each send their data into Solr directly. However, with the Pantheon platform this didn't look to be possible: in order to achieve close integration between Drupal and Solr, Pantheon locks down your Solr instance.

That left talking to Solr via Drupal.

Search API lets you define different datasources for your search data, and comes with one for each entity type on your site. In a datasource handler class, you can define how the datasource gets a list of IDs of things to index, and how it gets the content. So writing a custom datasource was one possibility.

Enter the next problem: the external sites that needed to be indexed only exposed their content to us in one format: RSS. In theory, you could have a Search API datasource which pulls in data from an RSS feed. But then you need to write a SearchAPI datasource class which knows how to parse RSS and extract the fields from it.

That sounded like reinventing Feeds, so I turned to that to see what I could do with it. Feeds normally saves data into Drupal entities, but maybe (I thought) there was a way to have the data be passed into SearchAPI for indexing, by writing a custom Feeds plugin?

However, this revealed a funny problem of the sort that you don’t consider the existence of until you stumble on it: Feeds works on cron runs, pulling in data from a remote source and saving it into Drupal somehow. But SearchAPI also works on cron runs, pulling data in, usually entities. How do you get two processes to communicate when they both want to be the active participant?

With time pressing, I took the simple option: define a custom entity type for Feeds to put its data into, and SearchAPI to read its data from. (I could have just used a node type, but then there would have been an ongoing burden of needing to ensure that type was excluded from any kind of interaction with nodes.)

Essentially, this custom entity type acted like a bucket: Feeds dumps data in, SearchAPI picks data out. As solutions go, not the most massively elegant, at first glance. But if you think about it, if I had gone down the route of SearchAPI fetching from RSS directly, then re-indexing would have been a really lengthy process, and could have had consequences for the performance of the sites whose content was being slurped up. A sensible approach would then have been to implement some sort of caching on our server, either of the RSS feeds as files, or the processed RSS data. And suddenly our custom entity bucket system doesn’t look so inelegant after all: it’s basically a cache that both Feeds and SearchAPI can talk to easily.

There were a few pitfalls. With Search API, our search index needed to work on two entity types (nodes and the custom bucket entities), and while Search API on Drupal 7 allows this, its multiple entity type datasource handler had a few issues to iron out or learn to live with. The good news though is that the Drupal 8 version of Search API has the concept of multi-entity type search indexes at its core, rather than as a side feature: every index can handle multiple entity types, and there’s no such thing as a datasource for a single entity type.

With Feeds, I found that not all the configuration is exportable to Features for easy deployment. Everything about parsing the RSS feed into entities can be exported, except the actual URL, which is a separate piece of setup and not exportable. So I had to add a hook_update_N() to take care of setting that up.
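For reference, a minimal sketch of what such an update hook might look like with Drupal 7 Feeds, assuming a standalone importer using FeedsHTTPFetcher; the importer ID and URL here are placeholders, not the actual project's values:

```php
/**
 * Set the source URL for the Feeds importer, since it is not
 * exported with the rest of the Feeds configuration.
 */
function mymodule_update_7001() {
  // Load the standalone source for our importer; 'remote_articles' is a
  // placeholder importer ID.
  $source = feeds_source('remote_articles');
  // FeedsHTTPFetcher stores the feed URL under the 'source' key.
  $source->addConfig(array(
    'FeedsHTTPFetcher' => array(
      'source' => 'https://example.com/feed.xml',
    ),
  ));
  $source->save();
}
```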

The end result though was a site search that seamlessly returns results from multiple sites, allowing users to work with a network of disparate sites built on different technologies as if they were all the same thing. Which is what they were probably thinking they were all along anyway.

Tags: drupal planet, search, Feeds
Categories: Drupal

Lullabot: Managing Authentication During API Migrations

30 October 2019 - 5:16am

In the article, Hide Your Keys, Hide Your Access, I discussed using environment variables as a way to keep access credentials and sensitive data out of your code repository. Now, let's take a look at how environment variables can be used during API migrations.

For the purposes of the following examples, let's assume the .env file defines the following variables:

Categories: Drupal

Specbee: An Introduction to the Meta tag module in Drupal 8

30 October 2019 - 4:10am
Gurukiran, 30 Oct 2019

A website without Meta tags is like a shop without a signboard. With Meta tags, you can showcase what your business/website is about without giving out too many details. Something that can captivate your audience and make them want to click to know more about your website. The Meta tag module in Drupal 8 lets you easily and dynamically create and customize various meta tag elements that can help you improve your search engine optimization (SEO) ranking.

Meta tags have been around for a very long time and play a significant role in optimizing your website for search engines. You can reach your audience organically when you set up meta tags the right way. 

So, how can you improve your Drupal website’s SEO with meta tags while being able to create or modify them as you like? Easy – Leverage the Meta tag module for Drupal 8. 

Meta tag Module for Drupal 8

With the Meta tag module, adding structured metadata about your website is easy. In addition to Meta tags for Keywords and Description, you can also customize content you want to display for each of your social media networks. It supports meta tags for Open Graph Protocol, Twitter Cards, Dublin Core and much more. The inclusion of Drupal Console integrations helps developers to create various custom meta tags too.

Getting Started: Installing the Meta tag Module

The Meta tag module requires you to install the Token and CTools modules.
You will need to download and install the Meta tag module from Drupal.org or pull it in with the Composer dependency manager.

Enabling Meta tag module

Go to Extend, search for Meta tag, and enable it. Along with Meta tag, you can also install the Meta tag extensions such as Open Graph and Twitter Cards.

Configuring the Meta tags

Go to Configuration. Click on Meta tag under Search And Metadata.

By default, you will get this set of types. Global is applicable to all your pages. You can create your own default meta tags for a content type or taxonomy term and configure them according to your requirements.

Basic Tags

These are the foundational meta tags, which are very effective in improving your Drupal website’s SEO.

Page title sets the title of the page. While the title tag doesn’t start with "meta", it is in the header and contains information that's very important to SEO.
Description lets you give a brief description of the page, which search engines display in the search results.

Advanced Tags

These are optional tags that you can use for improved SEO results.

Geographical tags give information related to location. Canonical URL tells search engines which URL is the preferred version of a page, so that duplicate content is not indexed separately. Robots gives you options to configure how search engines treat your pages, such as preventing Google translation or disabling search engine indexing. Image is a URL to a unique image representing the content of the page.

Open Graph Meta tags

Open Graph meta tags help with sharing content on social networks like Facebook, Pinterest, and others. The site name, title, image and description will be the brief content shown when sharing. Here you can also specify the type of image or video. It is important that you specify the image width and height.

Meta tags are extremely important for improved SERP rankings. With Drupal, you don’t have to hardcode meta tags anymore. The Meta tag module allows you to dynamically create and modify hundreds of meta tag elements.

Tags: Drupal Planet

Categories: Drupal

Promet Source: Drupal Migration Does Not Have to Be Scary

30 October 2019 - 3:42am
It's the season for celebrating scary stuff -- ghosts, goblins, ghouls, haunted houses and the whole host of night creatures. 
Categories: Drupal

Amazee Labs: DrupalCon Amsterdam Day 2 Recap

30 October 2019 - 2:38am
Let's take a glimpse into the second day of DrupalCon Amsterdam 2019.
Categories: Drupal

ComputerMinds.co.uk: Preparing your website for Black Friday

29 October 2019 - 8:18am

Does your site have an online store? Are you looking to run sales during the Black Friday and Cyber Monday period? Then read on: here are some quick tips to help you make the most of the sales period by ensuring both your marketing and your site are up to scratch.

Marketing / Promotion

You should have a plan in place for marketing your deals, otherwise how are people going to find out about them? This could include: promotions on your own website, promotion via social media (e.g. Facebook, Twitter, Instagram) and also via email.

Social media posts and emails should be planned out in advance and posted regularly enough in the run up so that people are fully aware about when your sale will be taking place and what offers they might be able to get (be careful not to post *too much* and annoy people though!).

Having promotional material on your site homepage is a great idea as it’s the first page a lot of customers will enter the site through. Effective promotional content present in the run-up to the sales period should help ensure that you are maximising the potential number of customers that will return to check out your sales. You should ensure that any promotional messages or content remain in place until after you have finished your sales.

Analytics - Sales goals

If your site was launched at least a year ago and you have Google Analytics in place (you’d be crazy not to, right?) then you should hopefully have some reliable data that you can look at from the same sales period last year, in order to better prepare and improve upon for this year.

  • What pages / products were the most popular last year?
  • Was there anything in particular about these pages / products e.g. better design / marketing that you think helped over others?
  • Are there any sales targets that you want to set or update and improve upon? 
  • How many people visited your site more than usual? 
  • Were your SEO keywords effective, if not can they be improved upon?
  • How many users are visiting from mobile devices, is your site as mobile friendly as it can be?

Once the sales period is over for this year and you are armed with all the analytics data from this year and the previous year(s)... then you can compare and see if you have hit or missed any sales targets that you may have set. You’ll hopefully also be able to see what worked well (or not so well) in your SEO and marketing to note areas of improvement for subsequent sales you may have in the future.

Discounts

Arguably one of the most important points of what you should consider are the discounts you will be offering on the site. If you don’t usually offer discounts or use discount codes on the site, then it would be wise to test any new discount logic or discount codes on a test environment before they go live. If there are any unexpected issues with the discounts not working correctly or not behaving in the way that you’d imagine, then you (or your developer!) have a chance to fix these things before you actually put them live. 

If you have a Drupal site with e-commerce and use discounts, then you will most likely be using the commerce_discount module to manage discounts on the site. This module generally works well at a basic level, although sometimes once you try and add in some more complex discounting logic, things can sometimes stop working properly. If you are having any commerce_discount related issues on your site and need them solved, or need some bespoke development done to handle your more complex discounting logic, then get in contact with us.

Orders

A final point that you may not have thought of is in regards to the (hopeful) extra increase in orders going through the site in the sales period. Is your stock management up to date and adequate to handle the extra increase in sales? Overselling any products and having to cancel orders is not good for the customer experience and is likely to lead to negative reviews, potentially damaging your reputation. If your site is using Drupal commerce then the commerce_stock contrib module is your best friend here. Amongst other things it allows you to have "add to cart" validation that can prevent users from adding products that are out of stock, and also disable the "add to cart" button when the stock level reaches zero.

Similarly you should think about how you’ll meet your usual shipping estimates that you have displayed on your website. If you anticipate that you won’t be able to keep to your usual timings then you should display a message in a prominent place on the site that the order processing and shipping may take longer than usual during the sales period.

The (Slightly) Technical Bit - Site load, speed, optimisation.

Well before the sales period even begins you should ensure that your site is running at an optimal speed. Google’s PageSpeed Insights can be used to benchmark your site and give you a detailed analysis of page load times and where you can improve by optimising your assets, code and more. The quicker your pages load, the more likely it is that people stick around on your site and don’t get frustrated trying to access what they want - and this is all the more important when your site might get a big influx of visitors during the sales period.

Following on from the Analytics section above; using any data you have for site visitor numbers from the previous year should give you a good indication of the number of visitors that you would hopefully expect to get this year. If the larger-than-usual amount of people visiting your site caused any server timeout or other connection issues, then you should ensure to better prepare the server and site for the influx this year.

I won’t go into too much detail here as benchmarking and optimising a site is a whole article in itself! But a few key points and quick wins to help your site are:

  • Ensuring you have an appropriate caching setup on your site. On a Drupal site as a bare minimum this would include enabling CSS & Javascript aggregation and ensuring Drupal page caching is enabled. The use of a reverse proxy such as Varnish is highly recommended as well.
  • Optimising images the site is serving up with a third party image optimising service such as Kraken. This will shrink file sizes without any noticeable decline in quality.
  • Using a CDN such as Amazon CloudFront to serve up your assets.
  • Minifying Javascript. On a Drupal site this is easy with the use of a module such as Minify JS.
  • Reducing the number of DOM elements to improve page load time.

The PageSpeed Insights tool mentioned above can act as a useful tool to test the optimisations you have done so that you know when you are actually making a (positive!) difference. Hopefully your developer has already done most of the above but if not, make sure they do in good time before the sales period begins.

And finally, if you haven't already, it's a good idea to let your web team know that you are planning a Black Friday sale. This will ensure they're not surprised when the server traffic spikes and they can make recommendations (if needed) to handle the increased traffic. The last thing you want is your site falling down and costing you sales!

Categories: Drupal

Dave Hall Consulting: Buying an Apple Watch for 7USD

29 October 2019 - 8:13am

For DrupalCon Amsterdam, Srijan ran a competition with the prize being an Apple Watch 5. It was a fun idea. Try to get a screenshot of an animated GIF slot machine showing 3 matching logos and tweet it.

I entered the competition.

The competition had a flaw. The winner was selected based on likes.

After a week I realised that I wasn’t going to win. Others were able to garner more likes than I could. Then my hacker mindset kicked in.

I thought I’d find out how much 100 likes would cost. A quick search revealed likes cost pennies apiece. At this point I decided that instead of buying an easy win, I’d buy a ridiculous number of likes. 500 likes only cost 7USD. Having a blog post about gaming the system was a good enough prize for me.

I was unsure how things would go. I was supposed to get my 500 likes across 10 days. For the first 12 hours I got nothing. I thought I’d lost my money on a scam. Then the trickle of likes started. Every hour I’d get 2-3 likes, mostly from Eastern Europe. Every so often I’d get a retweet or a bonus like on a follow up comment. All up I got over 600 fake likes. Great value for money.

Today Srijan awarded me the watch. I waited until after they’d finished taking photos before coming clean. Pics or it didn’t happen and all that. They insisted that I still won the competition even without the bought likes.

Think very carefully before launching a competition that involves social media engagement. There’s a whole fake engagement economy.

Categories: Drupal

1xINTERNET blog: 1xINTERNET wins two Splash Awards

29 October 2019 - 8:05am
hadda, Tue, 10/29/2019 - 16:05

The International Splash Awards 2019 took place last night in Amsterdam. The event was well organised and team 1xINTERNET had a great evening. It was nice to meet our colleagues from all over Europe and see the great selection of work being created with Drupal. Over 75 projects from 13 countries were submitted, so the jury had a big task choosing the nominees.

Categories: Drupal

Nuvole: DrupalCon CMI 2 Session slides

29 October 2019 - 6:42am
Yesterday I presented the updates for the Configuration Management Initiative 2 at DrupalCon Amsterdam. The session was super packed: there were 180 people in the small room, and many who wanted to join simply couldn't fit in.

The main takeaways are:

You can find more explanation in our previous blog post.

Attached are the slides of the session and I will update this post once the recording is available.

There is still a lot of work to be done for CMI 2, join us on the contribution day at DrupalCon.

Tags: DrupalCon, Drupal 8, Drupal Planet
Attachments: CMI 2 Updates, DrupalCon Amsterdam update
Categories: Drupal

Amazee Labs: DrupalCon Amsterdam Day 1 Recap

29 October 2019 - 4:08am
Hello from Amsterdam! Yes, the Amazees are on the road again, and it is time for Drupalcon Amsterdam 2019!
Categories: Drupal

Chapter Three: Language switcher for a multilingual Drupal 8 site

28 October 2019 - 10:25am

Recently one of our clients asked us to come up with a better language detection and redirection solution for their multilingual Drupal 8 site. Most out-of-the-box solutions do not provide a great user experience, and IP-based detection does not always work as expected. Browser-based redirection is also not ideal, since at some point a visitor might want to manually choose which language they want to see.

With this issue in hand I started looking into possible solutions. I looked at a number of multilingual Drupal and non-Drupal sites and couldn't find anything that would work for our client. I thought: what if we ask the visitor what to do by showing them a box with the browser-detected language? This is just like Chrome's translation prompt that asks if you'd like to translate the site, a prompt that is very simple and not as annoying as some auto-redirect solutions.

Categories: Drupal

Hook 42: Drupal Core Initiative Meetings Recap - October 21 - 25, 2019

28 October 2019 - 10:21am
Lindsey Gemmill, Mon, 10/28/2019 - 17:21
Categories: Drupal

Acro Media: Drupal Commerce Checkout: An Example of Being Headless Ready

28 October 2019 - 7:45am

Drupal Commerce 2, like Drupal 8, was a big change from previous versions. The code base is much different and it’s quite a learning curve when moving from Drupal 7 to the more complex approaches in Drupal 8. However, this is good. The new versions are modern and all around better. I’ve had a number of revelations while working with Drupal Commerce 2 and Drupal 8 that made me smile. In this post, I’ll explore one revelation I had while working with Drupal Commerce 2’s checkout handling and how its forward-thinking development has paved the way (and encourages all new checkout panes to follow suit) for headless ecommerce using Drupal.

Drupal Commerce 2 checkout is not a form… say what!?

Generally, when you think of checkout, you think of it as a sequence of events and one big final submission. This is further driven home by the idea that you can, and should, be able to go back and edit your checkout choices before the final submission. In Drupal Commerce 2, going back and forth between checkout steps is supported, but there is no final submission handler that saves everything.

Wait, what? That’s right, there’s no need to save all the data on the checkout form once checkout is completed. You see, every checkout pane (a step in the checkout process) has a submission event that gets called when it's time to save the data. So if you’re going to save data in a checkout pane, you gotta do it when your customer moves forward in the checkout process, well before they reach the final complete-checkout step. Submission is perceived to be at the end of checkout, not before.

On the surface that might make sense, in fact, this workflow being so obvious might even blind you to the implications. Since each pane is basically handling its own submission workflow, you can’t allow your form state to persist choices and not make a decision until the end. You’re probably, like me, thinking that saving data and reacting to data is the same thing. But this assumption is old, out of date, incompatible with best practices, and in checkout for Commerce 2, causes design problems.

Explanation through an example: A checkout newsletter subscription

A common want is to include a little checkbox underneath a contact information email field where new or returning customers can opt-in to a newsletter. Sure, that’s no big deal, right?

Our customer expects that things in checkout aren’t real until they complete checkout (i.e. nothing is saved until they actually place the order). On the other hand, Drupal Commerce 2 expects all panes to save their data after a “continue to next-step” button gets clicked, submitting that pane.

Here’s how the checkbox would be made using our current form submission logic:

  1. Create a CheckoutPaneBase object that collects data through a checkbox
  2. On the pane form submission, subscribe the customer to your newsletter

Do you see the problem? If we react on pane submission (our only choice in our current way of thinking), we’ll subscribe the customer to our newsletter well before they are done with checkout. In fact, each time they see the first page of checkout and proceed to the second, they will be subscribed to our newsletter. Not only is this not what the customer would expect, but subscribing them multiple times is totally unnecessary and would likely cause problems. Subscribing the customer on pane form submission is the wrong approach.

This is where things get really trippy – and awesome and beautiful and wonderfully clever and great. You see, Drupal 8, which Commerce 2 is built around, has been designed to not require forms, form states and value persistence in order to trigger important actions. This is a whole new way of thinking and maybe the most important to our discussion. Previous to this, most Drupal 7 developers would have assumed that all forms require user-facing interfaces that would be submitted, but that is a pretty brutal assumption and has plagued a lot of Drupal installations over the years. If that was still the case, then form submissions are something that headless implementations of Drupal would never really trigger. There must be a better way.

Headless decoupling breeds better code using events

If checkout was a single form with a final submission handler that submitted payment, subscribed users to newsletters, saved addresses to profiles, and did all the things you would expect all at once, then all the code that manages these things would have to react to a single form submission.

However, if we use Drupal's built-in event system instead, we suddenly have a much greater degree of control. But before we get into that, let’s first take a quick look at what events are and where they come from.

Drupal 8 made a big shift towards being object oriented by adopting Symfony within its framework. Symfony provides a number of components useful in modern object oriented programming, one of which is events. Events in Drupal 8 give developers a new way to extend and modify how interactions with core and other modules work. If you’re already familiar with Drupal 7, events are basically meant to replace hooks. Drupal 8’s event system documentation helps us to understand the basic concepts and components making up the event system.

  • Event Subscribers - Sometimes called "Listeners", are callable methods or functions that react to an event being propagated throughout the Event Registry.
  • Event Registry - Where event subscribers are collected and sorted.
  • Event Dispatcher - The mechanism in which an event is triggered, or "dispatched" throughout the system.
  • Event Context - Many events require specific set of data that is important to the subscribers to an event. This can be as simple as a value passed to the Event Subscriber, or as complex as a specially created class that contains the relevant data.

Source: Drupal.org documentation, Subscribe to and dispatch events (link)

Getting back to our checkout scenario, if you use the events system and your checkout completion is simply a state transition from Draft to Completed, then other modules could subscribe to that transition event, take the saved data from the different pane submissions, and do whatever they want with it.

Do you see the beauty here? By forcing checkout panes to submit before the final submission, we (module builders, implementers, etc.) have a baked-in reason to store checkout decisions on the order so that order events can access them separately, giving us the ability to create orders with checkout decisions saved that can skip checkout completely and still have the events trigger the needed actions. This is quite powerful and opens up a whole new world of possibilities. Of course, since this is an implicit design choice, it’s up to the author of the module or code to see the reasons and embrace them.

Entity + event-based instead of form-based

So to complete our newsletter subscription pane example using our new knowledge of events instead of form submissions, here’s what we would do:

  1. Create a CheckoutPaneBase object that collects data through a checkbox and saves it to the order (either through a field value or the ->setData() typed data interface).
  2. Save this value on pane submission but don’t act on the value (i.e. don’t subscribe the user)
  3. Create an event subscriber and use the transition event you want to use as a trigger. Completing checkout makes the most sense.
  4. Treat the order value as a "request subscription to newsletter." Then, when the event fires and the event subscriber runs, it can look for the saved value and set the user to subscribed or not after it returns. This allows us to handle someone going through an event twice for some reason, like for multiple orders, etc.
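Put together, the four steps above might look roughly like the following sketch. CheckoutPaneBase, the order's setData()/getData(), and the commerce_order.place.post_transition event are real Commerce 2 and state_machine pieces; the pane data key and the newsletter service are hypothetical:

```php
// In the pane class (extending CheckoutPaneBase): save the choice, don't act on it.
public function submitPaneForm(array &$pane_form, FormStateInterface $form_state, array &$complete_form) {
  $values = $form_state->getValue($pane_form['#parents']);
  $this->order->setData('newsletter_signup', !empty($values['newsletter_signup']));
}

// An event subscriber that reacts once checkout completes (the "place" transition).
class NewsletterSignupSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    return ['commerce_order.place.post_transition' => 'onPlace'];
  }

  public function onPlace(WorkflowTransitionEvent $event) {
    $order = $event->getEntity();
    if ($order->getData('newsletter_signup')) {
      // Hypothetical service: subscribes the customer, idempotently.
      \Drupal::service('mymodule.newsletter')->subscribe($order->getEmail());
    }
  }

}
```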

Your customer gets subscribed to your newsletter when they, and you, expect them to. No forms needed. ISN’T THAT AMAZING!

Thanks to the many authors of Drupal Commerce 2, including Bojan Živanović and Matt Glaman, who implemented this design choice years ago, many modules and implementations are simply technically better and likely ready for headless implementations now that headless is all-the-rage.

And best of all, from a developer standpoint, this also means the bulk of your most critical automated tests that interact with your code don’t have to access the checkout form. They simply have to have orders that get transitioned. This makes writing tests, which equates to better code, simpler.

Your Drupal Commerce experts

As a full service Drupal agency, Acro Media has significant expertise in digital commerce architecture, ecommerce consulting and design, customer experience, Drupal development and hosting architecture. We would love the opportunity to work with you.

Categories: Drupal

Spinning Code: Salesforce Queries and Proxies in Drupal 8

28 October 2019 - 7:04am

The Drupal 8 version of the Salesforce Suite provides a powerful combination of ready-to-use features and mechanisms for adding the custom add-ons you may need. What it does not yet have is much good public documentation to explain all those features.

A recent support issue in the Salesforce issue queue asked for example code for writing queries. While I’ll address some of that here, there is ongoing work to replace the query interface with something more like Drupal core’s. Hopefully once that’s complete I’ll get a chance to revise this article, but be warned that some of these details may be a little out of date depending on when you read this post.

To run a simple query for all closed Opportunities related to an Account that closed after a specific date you can do something like the following:

$query = new SelectQuery('Opportunity');
$query->fields = [
  'Id',
  'Name',
  'Description',
  'CloseDate',
  'Amount',
  'StageName',
];
$query->addCondition('AccountId', $desiredAccountId, '=');
$query->conditions[] = [
  "(StageName", '=', "'Closed Won'",
  'OR', 'StageName', '=', "'Closed Lost')",
];
$query->conditions[] = ['CloseDate', '>=', $someSelectedDate];
$sfResponse = \Drupal::service('salesforce.client')->query($query);

The class would need a use statement for Drupal\salesforce\SelectQuery. And ideally you would embed this in a service that would allow you to inject the Salesforce client service properly, but hopefully you get the idea.
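A sketch of that service approach might look like the following. The class and method names here are hypothetical, but SelectQuery and RestClientInterface come from the Salesforce Suite itself; you would register the class in your module’s services.yml with '@salesforce.client' as the argument:

```php
<?php

namespace Drupal\example\Service;

use Drupal\salesforce\Rest\RestClientInterface;
use Drupal\salesforce\SelectQuery;

/**
 * Example service wrapping the Salesforce client (hypothetical names).
 */
class OpportunityFetcher {

  protected $client;

  public function __construct(RestClientInterface $client) {
    $this->client = $client;
  }

  /**
   * Fetches closed Opportunities for an Account since a given date.
   */
  public function closedOpportunities($accountId, $since) {
    $query = new SelectQuery('Opportunity');
    $query->fields = ['Id', 'Name', 'CloseDate', 'Amount', 'StageName'];
    $query->addCondition('AccountId', $accountId, '=');
    $query->conditions[] = [
      "(StageName", '=', "'Closed Won'",
      'OR', 'StageName', '=', "'Closed Lost')",
    ];
    $query->conditions[] = ['CloseDate', '>=', $since];
    return $this->client->query($query);
  }

}
```

With the client injected, tests can swap in a mock and the query logic stays out of your controllers.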

The main oddity in the code above is the handling of query conditions (which is part of what led to the forthcoming interface changes). You can use the addCondition() method and provide a field name, value, and comparison operator, as line 10 does. Or you can add an array of terms directly to the conditions array, and they will be imploded together. Each element of the conditions array will be ANDed together, so OR conditions need to be inserted the way lines 11-14 handle it.
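To make the implosion concrete, the conditions above should render to SOQL roughly like this (an approximation, not verified module output; the ID and date are placeholders):

```sql
SELECT Id, Name, Description, CloseDate, Amount, StageName
FROM Opportunity
WHERE AccountId = '001XXXXXXXXXXXXXXX'
  AND (StageName = 'Closed Won' OR StageName = 'Closed Lost')
  AND CloseDate >= 2019-01-01
```

This is why the parentheses and quoting are embedded in the array terms themselves: the module joins the terms as-is rather than building the grouping for you.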

Running a query in the abstract is pretty straightforward; the main question is what you are going to do with the data that comes back. The suite’s main mapping features provide most of what you need for pulling down data to store in entities, and you should use the entity mapping features until you have a really good reason not to, so the need for direct querying is somewhat limited.

But there are use cases that make sense to run queries directly. Largely these are around pulling down data that needs to be updated in near-real time (so perhaps that list of opportunities would be ones related to my user that were closed in the last week instead of some random account).

I’ve written about using Drupal 8 to proxy remote APIs before. If you look at the sample code you’ll see the comment that says: // Do some useful stuff to build an array of data.  Now is your chance to do something useful:

<?php

namespace Drupal\example\Controller;

use Symfony\Component\HttpFoundation\Request;
use Drupal\Core\Controller\ControllerBase;
use Drupal\Core\Cache\CacheableJsonResponse;
use Drupal\Core\Cache\CacheableMetadata;
use Drupal\salesforce\SelectQuery;

class ExampleController extends ControllerBase {

  public function getJson(Request $request) {
    // Securely load the AccountId you want, and set date range.
    $data = [];
    $query = new SelectQuery('Opportunity');
    $query->fields = [
      'Id',
      'Name',
      'Description',
      'CloseDate',
      'Amount',
      'StageName',
    ];
    $query->addCondition('AccountId', $desiredAccountId, '=');
    $query->conditions[] = [
      "(StageName", '=', "'Closed Won'",
      'OR', 'StageName', '=', "'Closed Lost')",
    ];
    $query->conditions[] = ['CloseDate', '>=', $someSelectedDate];
    $sfResponse = \Drupal::service('salesforce.client')->query($query);

    if (!empty($sfResponse)) {
      $data['opp_count'] = $sfResponse->size();
      $data['opps'] = [];
      if ($data['opp_count']) {
        foreach ($sfResponse->records() as $opp) {
          $data['opps'][] = $opp->fields();
        }
      }
    }
    else {
      $data['opp_count'] = 0;
    }

    // Add cache settings for max-age and URL context.
    // You can use any of Drupal's contexts, tags, and time.
    $data['#cache'] = [
      'max-age' => 600,
      'contexts' => [
        'url',
        'user',
      ],
    ];

    $response = new CacheableJsonResponse($data);
    $response->addCacheableDependency(CacheableMetadata::createFromRenderArray($data));
    return $response;
  }

}

Cautions and Considerations

I left out a couple of details above on purpose. Most notably, I am not showing ways to get the needed SFID for filtering, because you need to apply a little security checking in your route/controller/service, and those checks are probably specific to your project. If you are not careful you could let anonymous users explore your whole database. It is an easy mistake to make if you do something like use a Salesforce ID as a URL parameter of some kind. You will want to make sure you know who is running queries and that they are allowed to see the data you are about to present. This is on you as the developer, not on Drupal or Salesforce, and I’m not risking giving you a bad example to follow.

Another detail to note is that I used the cacheable response for a reason. Without caching, every request would go through to Salesforce. That is slower than serving cached results (their REST API is not super fast, and you are proxying through Drupal along the way), and it leaves you open to a simple DoS where someone makes a bunch of calls and sucks up all your API requests for the day. Think carefully before limiting or removing those cache options (and make sure your cache actually works in production). Setting a context of both URL and user can help ensure the right people see the right data at the right time.

Categories: Drupal

Agiledrop.com Blog: Why creative agencies should partner with a development agency

28 October 2019 - 2:41am

In this post, we’ll dive into the main benefits for creative and design agencies of partnering with a skilled and reliable development agency as opposed to taking care of all their dev work in-house. 

READ MORE
Categories: Drupal

Symphony Blog: Speed up your Drupal 8 sites with image lazy loading

28 October 2019 - 1:43am

Recently, we were involved in a local project with Ecoparker.com. It is a directory of restaurants, cafes, entertainment, services, real estate and more in Ecopark Hanoi, Vietnam. This site is based on our best-selling directory theme BizReview.

On this site, there is a page which lists all kindergartens around the area. It has 20 listings and will continue to grow. It is a very typical page built with Drupal views.

read more

Categories: Drupal

Centarro: Launching Centarro Support at DrupalCon Amsterdam

27 October 2019 - 8:09pm

We shared our thoughts about eliminating barriers to Drupal Commerce growth earlier this year in response to a blog series by Drupal project lead Dries Buytaert. We laid out a roadmap that included closing the feature gap with our competitors and finding ways to better reach and support end users of our software. We've addressed the feature gap through numerous module releases since then, and we're launching the next phase, Centarro Support for Drupal Commerce developers and teams, here at DrupalCon Amsterdam.

Categories: Drupal

Love Huria: Understanding Time complexity - Big O Notations

25 October 2019 - 5:00pm

Lately, I have gotten interested in algorithms. The first thing I chose to understand deeply is how sorting algorithms work and their time complexity. However, this post is not about explaining sorting algorithms; instead, we will take a step back and understand time complexity [Big O Notation] in the simplest way possible.

Before we go any further, let’s understand what an Algorithm is:

An Algorithm is a step-by-step instruction, telling a program to execute in a certain way to solve a particular problem. And it’s pretty obvious that when we run a program in any language,...

Categories: Drupal

Drupal Association blog: Meet us at Booth 3 at DrupalCon Amsterdam

25 October 2019 - 12:05pm

Our staff will be at Booth 3 ready to talk with you about the Drupal community, how you can get more involved as a contributor, and to hear about your needs. 

Make sure you....

✓ pick up some Drupal stickers

✓ show your support by signing up for membership or partner programs

Session highlights
  • Tuesday at 16h15, in G107, attend the Drupal Association Townhall with our Executive Director Heather Rocker (hrocker), CTO Tim Lehnen (hestenet), and our Board Chair Adam Goodman (adamgoodman). We'll be taking questions and diving into topics important to the community.
  • Wednesday at 11h30, in G107, we're holding our public board meeting. All are welcome to attend!
     
  • Also on Wednesday, if you're curious about what the Drupal.org Engineering Team is working on, come to the Drupal.org Infrastructure Update session in G103 at 17h15.

See you soon!

Categories: Drupal
