Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

DataSmith: Layman's Guide to Image Lazy Loading in Lightning

16 June 2019 - 8:08am
Categories: Drupal

Evolving Web: Final Thoughts on Drupal North 2019

14 June 2019 - 8:57pm

This year's edition of Drupal North has come to an end and what a great experience it has been. We got to connect with so many people in the community, share our experiences, and learn from others. Here are some of the highlights and key takeaways from the last 3 days:

Accessibility really is at the top of everyone's minds

Throughout the conference, there were so many conversations about accessibility -- from best practices to the future of accessibility tools and websites. With the passing of Bill C-81, government and higher education institutions in Canada will need to make this a top priority, as their websites will need to comply with WCAG Level AA standards. Those using Drupal will already be one step ahead, as it is more accessibility-friendly than other content management platforms.

Just like Drupal, the Drupal North community values openness

The people that make up the Drupal community are always willing to share tips, tools, information, and what processes work best for them. Some of the attendees we spoke with specifically said they were excited to implement some of the techniques and tools that were discussed at Drupal North. This is a community of learners and innovators, always striving to move the web forward.

We are a group of problem-solvers

Some of the most valuable conversations this week were held when attendees were encouraged to share problems they've been encountering and ask the community for ideas and suggestions. The roundtable breakouts at the summits were great for brainstorming and connecting with others having similar issues, and lots of great feedback and input was shared during the Interactive UX Workshop that Alex Dergachev and I hosted.

We hope everyone had a great time at Drupal North 2019 and we are very proud to have been a Gold Sponsor for this year's conference. Keep following along with us on LinkedIn and Twitter, stay up-to-date on future trainings and UX Meetups, and reach out to us if you want to talk Drupal.

Categories: Drupal

Thinkbean: Looking good Drupal, looking good!

14 June 2019 - 8:36am
Once upon a time, the authoring experience (AX) in Drupal left much to be desired. Content editors using Drupal 6 or Drupal 7 glanced longingly at WordPress editing screens, hoping for some of that ease of use. The flexibility of Drupal from a content management standpoint was never in doubt, but we all wished that the edit screen looked better and behaved the way we were accustomed to with other modern digital products and services. Well, the wait is finally over. Welcome to the new Drupal authoring experience! Let's focus on three main areas of the Drupal authoring experience which have made Drupal 8 a game changer for digital marketing professionals.

1. Gutenberg Editor

It's nice... it is really nice! Below is a screenshot of the new Gutenberg editor experience available in Drupal 8.
Categories: Drupal

Mediacurrent: Our Clients’ Top 2019 Digital Marketing Challenges

14 June 2019 - 6:54am

What's the one big challenge that the marketers and CMOs we partner with are facing this year? It's really tough to put a finger on just one. Proving impact on revenue, marketing team staffing, personalization, and marketing-IT alignment are among the hurdles voiced in discussions that Mediacurrent's sales team is having with prospects and clients. We are finding that CMOs are pressed more than ever to show marketing's value while the complexities and opportunities within digital continue to evolve. Let's dive into each challenge and uncover what makes these hurdles difficult to jump — and the tools or approaches that can help marketers overcome them.

Proving Impact on Revenue

It's probably not surprising that last year Gartner surveyed where CMOs were spending their marketing budgets. They found that marketing budgets have shrunk slightly year over year since 2016, while a higher percentage of budgets is being allocated to digital. The pressure is on for marketers to prove how specific marketing campaigns and investments directly contribute to an organization's revenue. Owners and shareholders want more specificity in understanding how much budget to allocate to higher revenue-generating activities. Furthermore, marketers need to react faster to fluctuating market conditions that impact customer experience.

How can you attribute revenue to specific marketing activities and demonstrate ROI so you can invest and optimize in the right activities? There are a number of SaaS tools available and most implement a specific approach to measure marketing attribution and achieve granular ROI tracking. 

  • Matomo - offers a GPL-licensed on-premise web analytics stack.
  • Bizible - analytics and multi-touch revenue attribution.
  • Terminus / Brightfunnel - product suite that offers account-based marketing analytics and sales insights.
  • Conversion Logic - cross-channel attribution with AI-powered insights and budget forecast simulations.
  • Allocadia - a marketing performance management platform that offers revenue attribution and insights into marketing budget allocation.
  • Full Circle Insights - product stack that tracks marketing and sales attribution, built as a native Salesforce App.
  • Google Attribution - formerly called Adometry, it’s now part of the Google Marketing Platform.
  • Salesforce CRM - ROI tracking can be enabled with additional campaign configuration.
  • Domo Digital 360 - full suite of analytics, funnel, and ROI tracking.
  • VisualIQ - strategic measurement, multi-touch attribution, audience analysis, and predictive insights.
  • Oracle Marketing Cloud - integrated suite of tools that include analytics, marketing automation, content/social marketing, and data management.

Because each tool specializes in a specific aspect of ROI tracking, you will need to do some research to understand which tool best fits your organization. Most of the tools listed above implement some form of attribution tracking that will help achieve more robust ROI calculations. Our Director of Marketing, Adam Kirby, gives a helpful overview of how marketing attribution works in his MarTech West slide deck. Organizations we speak with often need help from consultants and agencies to understand how to optimally configure their martech stack with ROI tracking tools. This need, coincidentally, brings us to the next challenge marketers are facing...

Staffing Teams - The Right Blend

Organizations are becoming more careful to find the proper balance between internal team staffing and engaging help from an outside agency. In the early 2010s, there was a movement within Fortune 2000 companies to bring more expertise in-house. As martech complexity evolved into the latter part of this decade, organizations are realizing that their in-house teams' exposure to new technologies and approaches is limited. By engaging with a wide spectrum of industries, clients, and projects, agencies provide a broad view into the martech landscape that in-house teams don't have. What's the right blend? It depends on the vertical. Organizations with one large website typically outsource at least half of their digital marketing. Higher ed and government have longer procurement cycles and, consequently, need at least 75% of their overall marketing team to be full-time in-house.

In-house teams don't just need outside help to stay informed; budget scrutiny is also pushing CMOs to seek offshore development help. However, they are finding that offshore work falters when technology projects aren't led by one or more onshore architects who maintain a project's integrity between onshore stakeholders and offshore teams. These technical liaisons are critical to offshore development success. We see too many organizations assume that if offshore developers demonstrate technical competency, they should be fully capable of leading an implementation. Yet those organizations fail to consider how strongly local culture influences communication dynamics and how offshore teams perceive requirements.

Personalization

Another challenge marketers are targeting is how personalization can impact KPIs and produce a higher ROI than other digital marketing efforts. In 2017, personalization was a buzzword while marketers were trying to understand what it takes, in content and labor, to implement. Since GDPR went into effect a little over a year ago, personalization efforts have had to account for how the law affects customer data acquisition and retention, making personalization trickier and more complex with respect to data analysis and the ability to capitalize on personalization opportunities. Tools like Acquia Lift, the open source marketing automation platform Mautic (recently acquired by Acquia), Triblio, and Optimizely Web Personalization offer slightly different perspectives on personalization.

When evaluating if you’re ready for personalization, here are eight considerations that will dictate success when carefully planned or potential failure if not addressed:

  1. Do you have enough content that’s written for each persona your personalization effort needs to target?
  2. Do you have content creators who can continually create new and evergreen content?
  3. Do you have KPIs defined to track the performance of your personalization efforts?
  4. Is your martech stack compatible with personalization technologies that fit your business model?
  5. Do accurate, fresh data metrics exist in usable forms? Is data structured uniformly and exclusive of redundancies that might skew its meaning?
  6. How do data privacy laws impact the efficacy of a personalization initiative? Can enough of the right user data legally be captured to supply the initiative?
  7. Are data governance guidelines in place that ensure data integrity stays intact well beyond the implementation phase of a personalization initiative?
  8. Finally, is your department or organization committed to investing time and energy into personalization? It’s a long game and shouldn’t be misinterpreted as an off-the-shelf-set-it-and-forget-it type of content solution.

If you’re starting a personalization strategy from ground zero, Mediacurrent Senior Digital Strategist Danielle Barthelemy wrote a quick guide to creating a content strategy with personalization as the end-goal. Danielle illustrates how a sound personalization strategy positively influences purchase intent, response rate, and acquisition costs. 

Marketing-IT Alignment

For digital marketing execution to be as effective and efficient as possible with initiatives like ROI tracking and personalization, it's imperative that marketing and IT teams collaborate cohesively. A frictionless environment is critical for marketers to keep up with an ever-increasing market speed. In some organizations, these two departments still maintain competing interests in relation to policy, security, infrastructure, and budget. Example scenarios include strict IT policies that stifle speed-to-market, cowboy marketers all but ignoring technical security when implementing new tools, and executives missing the budgetary discord that echoes when both departments operate in their own silos.

These independent agendas must be meshed together into one for the betterment of the organization. But how? 

  • Learn to empathize by understanding each other's goals and challenges across departments. Define a shared list of KPIs and set a time frame for each.
  • Schedule weekly touch point meetings between IT and marketing leaders.
  • Conduct a quarterly tools review to understand the “why” behind tools that each department uses.
  • Demonstrate discipline-specific concepts that require collaboration from the other department. For instance, show IT how marketing attribution works and what’s required of them to make it successful. Or, show marketing what a normalized database is and how it will help marketing be successful by reducing duplicate data.

Marketing ROI: An Ongoing Challenge

Overall, the challenges CMOs are asking us about as we move into the latter half of 2019 are heavily rooted in accurately tracking ROI and putting tools in place to boost it. While marketers have been challenged to prove ROI for years, digital has evolved to a point where tools and systems exist that embolden marketers to aggressively pursue an understanding of where their money is best spent. For most organizations, there are still talent hurdles to overcome and knowledge gaps to fill to properly implement martech and systems that accurately track ROI.

How about you — what challenges is your marketing department working to solve this year? Have you found the right blend of in-house and agency team members? Have you had success with ROI tracking and personalization?

Categories: Drupal

OpenSense Labs: Is Decoupled Drupal the Right Choice for You?

13 June 2019 - 9:59pm

There is a lot of buzz around "Decoupled Drupal", and it has quickly become ubiquitous in the industry. Drupal has won hearts by embracing the newest technology and presenting the best of possibilities. Fully separating the presentation layer from the content has given content management systems the means to accelerate the pace of innovation.

In this blog, we will address the what, why and when of Decoupled Drupal for you.

Decoupled Drupal Is For You

By using separate frameworks for the front end and the back-end content management experience, Decoupled Drupal provides a content presentation layer that is completely separate from content management. It is also known as 'Headless Drupal', where the 'head' refers to the front-end rendering (the presentation of the content) and the 'body' to the back-end storage.

Addressing the 3 Ws: Why, What, When 

In this section, we will take one question at a time and examine the core functionality of Decoupled (Headless) Drupal.

Why Decoupled?

As a flexible framework for developing websites, web/native apps and similar digital products, Decoupled Drupal allows designers and front-end developers to build without limitations. As an organisation, you can leverage a decoupled approach for progressive web apps and native apps. Decoupled Drupal has created a buzz in the community with its divide-and-conquer development strategy.

What’s your Intention?

Your intentions determine the outcome, i.e., how your product will be built with Decoupled Drupal. For the developers working on it, here are a few scenarios and their outcomes:

  • In case of standalone websites/applications, decoupled Drupal might not be a wise choice. 
  • For multiple web applications and websites, decoupled Drupal can be leveraged in two different ways. 
  • When building non-web native apps, you can employ decoupled Drupal to attain a content repository without its own public-facing front end.
Source: Dri.es

Once the intentions are clear, the next step is to see if it can be executed given a proper apparatus. Here are a few questions that should influence your decision to choose decoupled Drupal: 

  • Is it right for your project and your team?
  • Do you have a strong grasp on your data needs?
  • Can your current hosting provider support this architecture?
  • Are you prepared to handle the complexity of serving content to multiple clients?
  • Do the URL alias values have a unique identifier that makes API requests easy?
  • Can your metadata logic generate meta tags, JSON-LD, and analytics with standardised rules?
  • Where are menus created, ordered, and managed? 
  • Do you have an architecture that supports combining multiple redirect rules into a single redirect?

When to Decouple

By now we have established that Decoupled Drupal comes with plenty of advantages. It's time to delve deeper and look at the circumstances in which it can be put into effect:

Decoupled Drupal allows designers and front-end developers to build without limitations.

Resources

Progressively decoupling Drupal requires developing the back end and front end separately, so separate resources are a must. Two individually capable teams that can collaborate and support each other make for a successful decoupled Drupal project.

Multiple Channels

The need to publish content and data across multiple platforms and products can affect the way you go headless.

Applicable Content

Decoupling is a great fit if you already have interactive content. Visualisations, animations, and complex user flows push you towards frameworks like Ember, React, Vue.js or Angular.

Drupal Interface

Sometimes, a rich interface and built-in features can hinder the work. Even with Drupal's flexible content model for storing content, some cases call for a different interface for adding and managing that content.

When Not to Decouple

Conversely, it is equally important to know in which situations a decoupled Drupal might not thrive. Gauge these possibilities to rule out such situations or projects:

  • Drupal has the advantage of a huge pile of free modules from the open source community. But with decoupled Drupal, the ability to easily "turn on" front-end functionality goes out the window. Separating the content management system from the front end means you can no longer manage your website's front end directly.
  • Drupal's front-end proficiency should align with your front-end requirements. A mismatch can leave your decoupled plans in doubt.

Conclusion

There's no question about the abilities of Decoupled Drupal. It's your business requirements that should fit the headless architecture like a puzzle piece. With the necessary technical leadership skills and expertise in this web infrastructure, you can carry your decoupling aspirations all the way through.

We’d love to hear your feedback on our social media platforms: Twitter, Facebook and LinkedIn

And do not forget to share more ideas at hello@opensenselabs.com

Categories: Drupal

Evolving Web: What We Learned at Drupal North Day 2

13 June 2019 - 8:41pm

Another successful day at Drupal North is now complete! This day was packed with sessions from all kinds of speakers, including our very own Jigar Mehta and Robert Ngo. Some great discussions were had amongst the Drupal community which was out in full force. Here are some of the ideas that we saw repeated throughout the day:

Content must be modular

Making your content modular allows you to easily plug it into any new type of channel. There's no need for you to start from scratch just because you're creating something for a different platform or user base. And, if you keep this content in a centralized hub, all users have access to the most accurate and up-to-date versions.

Plan out where you're going in the initial design phase

Knowing where you're going makes it that much easier to get there. You need to start with solid components so you don't have to go back later on and make constant revisions. A detailed plan allows you to take advantage of UI Patterns that will save you time and headaches in the future.

More and more people actually know about Drupal

Years ago, many within the Drupal community would have to explain to people what Drupal, and even open-source was. This made the task of convincing them to switch to a Drupal site even harder. Now, executives and decision-makers will have often already heard of Drupal and just need to be convinced of what value YOU can bring to them.

Accessibility is key

The web is for everyone, and that means your website needs to be accessible to everyone. It's also important to maintain this accessibility; technology is always improving, so just because your site was accessible when you launched it three years ago doesn't mean it is today. And when you conduct user tests, try to recruit diverse participants in order to get more inclusive results.

Drupalers love basketball!

To wrap up the day, conference attendees went to the after party to catch game 6 of the NBA Finals -- GO RAPTORS!

Just one more day left of Drupal North and we hope you've been making the most of it! Make sure you're following along with us on LinkedIn and Twitter, and check out the rest of our daily recaps on this blog.

Categories: Drupal

Kanopi Studios: 5 Things to Consider When Executing a Website Rebuild

13 June 2019 - 12:26pm

You’ve decided it’s time to rebuild your website. Research has been done, conversion rates have been analyzed, the team has selected a rebuild over a focused fix, and you and your team are committed to making this happen. One of the easiest ways of ensuring your success is to remain mindful of a few key things as you work your way through this larger process.

Regarding that term, "mindful:" one of the Kanopi team's favorite authors is Brené Brown. She writes, "Mindfulness requires that we not 'over-identify' with thoughts and feelings so that we are not caught up and swept away by negativity." For the purposes of your website rebuild, I'd adapt this to be, "Mindfulness requires that we not 'over-focus' on what we've done before, and rather remain aware of what's important for our success so that we can focus on where we want to be."

So, let's get to it and break down the top five things we need to be mindful of when executing a rebuild project.

1. YOU are the difference! Be engaged.

Stakeholder engagement can make or break a rebuild. But rebuilds are time-consuming, and you and your stakeholders will likely be pulled in several directions as you try to execute a rebuild while balancing other priorities and projects.

Your availability, open communication, and timely feedback are critical to enable your team to create the web presence your organization needs to reach its goals. Be realistic about the time your team can devote to the project so you can be as fully engaged as possible. Define roles and responsibilities early as well so it's clear who is handling what.

If you need an assist from an outside agency to keep the project moving quicker, be direct with them about your business needs and wants. Help them to understand your users and audiences. An agency will make every effort to dive deeply into understanding your market, but at the end of the day, you and your team are the experts on what you do. So view any outside agency as a partner who can work with you towards success, and stay engaged with them throughout the process.

2. Define success & track it

We cannot know if we're successful until we have identified what success will look like. For some sites, it's simply exposure. For others, it's a need to meet specific goals. Take the time to define what your organization needs to achieve and which key metrics will allow you to quantify success.

Not sure where to start? Here are common metrics you should benchmark now as you prepare for the rebuild:

  • Users: note how many users are regularly coming to your site
  • Bounce Rate: record the overall bounce rate. Make note if this is at, above or below your industry’s standard.
  • Average Session Duration: how long are users staying on your page?
  • Sessions by Channel: where are your users coming from? How much organic traffic is coming in?
  • Top Keywords: identify what words are being used in the search engines when users are finding you. Are these surprising?
  • Competitor Keywords: are users who are looking at your competitors using the same keywords?
  • Top Referrers: who is sending traffic to your site? Maybe social media is key, or you’re more focused on industry referrals. Determine where you should be in the market.
  • Conversion Rates: what forms do you need users to fill out? What conversions are critical to your business goals? These can take the form of contact forms, forms from your CRM tools such as Marketo or Pardot, or even visits to a specific page or video views.
  • Accessibility: does your site meet national or international compliance standards?

In short, benchmark where you are now, and use this data to help round out that definition of success. Then come back a few months after launch to reevaluate and compare so you can quantify the success to your stakeholders.

3. Get your content strategy in order

The old saying “Content is King” is truer today than ever. Users are more educated. Search engines have become smarter, looking for more than keywords — they look for meaning in phrases to help determine the focus of a given page.

As one of the most effective methods of growing audience engagement, developing your brand presence, and driving sales, content marketing is a mission-critical growth method for most businesses. — Hubspot

This is where most people turn to me and tell me they’ll get their team on it so they can move further along in the content process. But don’t underestimate the time and energy content development/aggregation can take, even if your larger project is hiring a copywriter to augment your team. All too often, when content becomes a late-stage endeavor a few things happen:

  • timelines get pushed out, waiting for content to be approved.
  • changes to the previous UX are often required to account for unrealized navigation or calls to action, causing potential budget overages.
  • content is rushed and not in alignment with the overall vision.

To help this process come together for your team, here are a few action items to start with:

  • Audit your content: take a full inventory of your site’s content to better identify:
    • what to keep
    • what to repurpose
      • for example: the video may look dated, but could your team write a blog post from that material?
    • what should not be migrated to your new site
      • this can be archived to be referenced at a later date
  • Build a sitemap: determine the hierarchy of the content on the new site.
  • Identify missing content: comparing your audit to your sitemap, what needs to be produced?
  • Track content creation: track who is responsible for writing, editing and approving content — and give them deadlines
  • Start thinking ahead: you may need to start planning future content. Developing an editorial calendar will help keep the process moving. Content typically included in an editorial calendar:
    • blog posts
    • social media posts
    • videos
    • infographics

When preparing for a rebuild, your content strategy has to be one of the first things your team takes on. This approach will save you time, headaches, and likely budget moving forward. 

4. Consider your users’ digital experience

By this stage in the process, you should know your target market, their buying habits, and why your product or service is of value to them. You likely have personas and other data to help back this up. But in the omnichannel world in which we thrive, there is often more to architecting an effective user journey. Understanding the nuances of devices, the influence of how a user comes to your site, and overall adherence to best practices is complex. For example, consider the following:

  • What percentage of users are coming from mobile devices?
    • Are your CTAs and main conversion points easy to access on a small screen?
    • Is the user journey simplified?
  • Are your users coming from social media?
    • Is it your blog driving traffic or more word of mouth?
    • Is it positive or negative attention?
  • Have you produced a user journey map to identify the different pathways to conversion?
    • Is your site currently set up to promote these journeys?
    • Are you utilizing personalization to customize that user journey?

You can learn more about how to use user research to gain insight into audience behavior and help frame your thinking about your personas' overall user journey to conversion.

5. Think about the future of your site

Websites need to evolve and adapt as the needs of your users change over time, but as you rebuild, are you setting yourself up for more incremental changes moving forward? Keep in mind that most rebuilds are focused on the MLP or “Minimum Lovable Product.” It’s the simplest iteration of your site that will meet your current needs with the intent to continually improve it over time. Regardless of whether you’re focused on an MLP launch due to either time or budget constraints, we need to keep these future goals in mind as we progress.

And then there’s the technology side of this: whether you’re looking ahead to Drupal 8 or 9 or the next major evolution with WordPress, consider those needs now to help ‘future proof’ your new site. The web changes too quickly to risk your site being stale when it’s still brand new. Talk this through from the start with your team.

These steps will set you up for success.

Your site speaks to who you are as an organization to your target market. Whether you’re a non-profit, higher education or a corporate entity, being mindful now will set your team’s rebuild up for success. And if you need help with your rebuild, contact us. We’d love to partner with you and help you recognize that success.

The post 5 Things to Consider When Executing a Website Rebuild appeared first on Kanopi Studios.

Categories: Drupal

Cheeky Monkey Media: Drupal 8 and 9 Features That Have Us Going Bananas

13 June 2019 - 11:46am

Just when you think Drupal couldn’t get any dumber, it goes and adds some great new features….. And TOTALLY redeems itself!


Released back in November of 2015, Drupal 8 has been slowly but steadily upping its game.

In case you’ve been lost in a jungle for the past couple of years, or maybe you just don’t keep up with that kind of thing, we’ve got you covered.

Here are just some of the things Drupal 8 and soon to be Drupal 9 have us jumping around like crazy apes about.

Categories: Drupal

OSTraining: How to Build User Profiles With Fields in Drupal 8

13 June 2019 - 8:49am

By default, a Drupal 8 user account collects only very basic information about the user. 

And, most of that information is not visible to visitors or other users on the site.

Fortunately, Drupal makes it easy to modify and expand this profile so that people can add useful information about themselves such as their real name (versus a username), address, employer, URLs, biography, and more.

Categories: Drupal

wishdesk.com: Image hover effect in Drupal 8 with Imagepin button module

13 June 2019 - 5:08am
Drupal 8 makes easy content creation a priority, and there are also many useful modules for creating image hover effects. Let's take a look at a simple but nice one — the Imagepin button module.
Categories: Drupal

Srijan Technologies: Demystifying the Decoupled Architecture

13 June 2019 - 2:18am

In a world where there is no limit to the devices used to access information, you must ensure your data is always available on the go! The pace of innovation in content management is accelerating, along with the number of channels that web content must support.

Categories: Drupal

Web Omelette: Dynamic migrations using "templates" in Drupal 8

13 June 2019 - 12:24am

This article is a companion to the presentation I held at the Drupal Dev Days 2019 conference in Cluj Napoca.

In this article we are going to explore some of the powers of the Drupal 8 migration system, namely the migration “templates” that allow us to build dynamic migrations. And by templates I don’t mean Twig templates but plugin definitions that get enhanced by a deriver to make individual migrations for each of the things that we need in the application. For example, as we will explore, each language.

The term “template” I inherit from the early days of Drupal 8 when migrations were config entities and core had migration (config) templates in place for Drupal to Drupal migrations. But I like to use this term to represent also the deriver-based migrations because it kinda makes sense. But it’s a personal choice so feel free to ignore it if you don’t agree.

Before going into the details of how dynamic migrations work, let's cover a few of the more basic things about migrations in Drupal 8.

What is a migration?

The very first thing we should talk about is what a migration actually is. The simple answer: a plugin. Each migration is a YAML-based plugin that brings together all the other plugins the migration system needs to run an actual logical migration. And if you don't know what a plugin is, plugins are swappable bits of functionality that are meant to perform a similar task, depending on their type. They are all over core, and by now there are plenty of resources to read more about the plugin system, so I won't go into it here.

Migration plugins, unlike most others such as blocks and field types, are defined in YAML files inside the module’s migrations folder. But just like all other plugin types, they map to a plugin class, in this case Drupal\migrate\Plugin\Migration.

The more important thing to know about migrations, however, is the logical structure they follow. By this I mean that each migration is made up of a source, multiple processors and a destination. Makes sense, right? You need to get some data (the source reads and interprets its format), prepare it for its new destination (the processors alter or transform the data) and finally save it in the destination (which has a specific format and behaviour). And to make all this happen, we have plugins again:

  • Source plugins
  • Process plugins
  • Destination plugins

Source plugins are responsible for reading and iterating over the raw data being imported. This data can come in many formats: SQL tables, CSV files, JSON files, URL endpoints, etc. And for each of these we have a Drupal\migrate\Plugin\MigrateSourceInterface plugin. For average migrations, you'll probably pick an existing source plugin, point it to your data and you are good to go. You can of course create your own if needed.

Destination plugins (Drupal\migrate\Plugin\MigrateDestinationInterface) are closely tied to the site being migrated into. And since we are in Drupal 8, these relate to what we can migrate to: entities, config, things like this. You will very rarely have to implement your own, and typically you will use an entity based destination.

In between these two, we have the process plugins (Drupal\migrate\Plugin\MigrateProcessInterface), which are admittedly the most fun. There are many of them already available in core and contrib, and their role is to take data values and prepare them for the destination. And the cool thing is that they are chainable so you can really get creative with your data. We will see in a bit how these are used in practice.

The migration plugin is therefore a basic definition of how these other 3 kinds of plugins should be used. You get some meta, source, process, destination and dependency information and you are good to go. But how?

That’s where the last main bit comes into play: the Drupal\migrate\MigrateExecutable. This guy is responsible for taking a migration plugin and “running” it. Meaning that it can make it import the data or roll it back. And some other adjacent things that have to do with this process.

Migrate ecosystem

Apart from the Drupal core setup, there are few notable contrib modules that any site doing migrations will/should use.

One of these is Migrate Plus. This module provides some additional helpful process plugins, the migration group configuration entity type for grouping migrations and a URL-based source plugin which comes with a couple of its own plugin types: Drupal\migrate_plus\DataFetcherPluginInterface (retrieve the data from a given protocol like a URL or file) and Drupal\migrate_plus\DataParserPluginInterface (interpret the retrieved data in various formats like JSON, XML, SOAP, etc). Really powerful stuff over here.
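As a rough sketch, a source definition using the Migrate Plus URL plugin with the JSON parser could look something like the following (the endpoint, selectors and field names here are made up for illustration and are not part of any module):

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls:
    - 'https://example.com/api/categories'
  item_selector: 'data'
  fields:
    - name: id
      label: 'Unique Id'
      selector: 'id'
    - name: label_en
      label: 'Label EN'
      selector: 'label'
  ids:
    id:
      type: string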

Another one is Migrate Tools. This one essentially provides the Drush commands for running the migrations. To do so, it provides its own migration executable that extends the core one to add all the necessary goodies. So in this respect, it’s a critical module if you wanna actually run migrations. It also makes an attempt at providing a UI but I guess more of that will come in the future.

The last one I will mention is Migrate Source CSV. This one provides a source plugin for CSV files. CSV is quite a popular data source format for migrations so you might end up using this quite a lot.

Going forward we will use all 3 of these modules.

Basic migration

After this admittedly long intro, let's see what one of these migrations looks like. I will create one in my advanced_migrations module, which you can also check out from Github. But first, let's see the source data we are working with. To keep things simple, I have this CSV file containing product categories:

id,label_en,label_ro
B,Beverages,Bauturi
BA,Alcohols,Alcoolice
BAB,Beers,Beri
BAW,Wines,Vinuri
BJ,Juices,Sucuri
BJF,Fruit juices,Sucuri de fructe
F,Fresh food,Alimente proaspete

And we want to import these as taxonomy terms in the categories vocabulary. For now we will stick with the English label only. We will see after how to get them translated as well with the corresponding Romanian labels.

As mentioned before, the YAML file goes in the migrations folder and can be named advanced_migrations.migration.categories.yml. The naming is pretty straightforward to understand so let’s see the file contents:

id: categories
label: Categories
migration_group: advanced_migrations
source:
  plugin: csv
  path: 'modules/custom/advanced_migrations/data/categories.csv'
  header_row_count: 1
  keys:
    - id
  column_names:
    0:
      id: 'Unique Id'
    1:
      label_en: 'Label EN'
    2:
      label_ro: 'Label RO'
destination:
  plugin: entity:taxonomy_term
process:
  vid:
    plugin: default_value
    default_value: categories
  name: label_en

It’s this simple. We start with some meta information such as the ID and label, as well as the migration group it should belong to. Then we have the definitions for the 3 plugin types we spoke about earlier:

Source

Under the source key we specify the ID of the source plugin to use and any source-specific definition. In this case we point it to our CSV file and kind of "explain" to it how to understand the CSV file. Do check out the Drupal\migrate_source_csv\Plugin\migrate\source\CSV plugin if you don't understand the definition.

Destination

Under the destination key we simply tell the migration what to save the data as. Easy peasy.

Process

Under the process key we do the mapping between our data source and the destination specific “fields” (in this case actual Drupal entity fields). And in this mapping we employ process plugins to get the data across and maybe alter it.

In our example we migrate one field (the category name), and for this we use the Drupal\migrate\Plugin\migrate\process\Get process plugin, which is assumed unless another one is actually specified. All it does is copy the raw data as it is, without making any change. It's the most basic and simple process plugin. And since we are creating taxonomy terms, we need to specify a vocabulary, which we don't necessarily have to take from the source. In this case we don't, because we want to import all the terms into the categories vocabulary. So we can use the Drupal\migrate\Plugin\migrate\process\DefaultValue plugin to specify what value should be saved in that field for each term we create.
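Process plugins can also be chained, with the output of one feeding into the next. A minimal, hypothetical sketch (the trim callback here is just for illustration and is not part of this migration):

process:
  name:
    - plugin: get
      source: label_en
    - plugin: callback
      callable: trim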

And that’s it. Clearing the cache, we can now see our migration using Drush:

drush migrate:status

This will list our one migration and we can run it as well:

drush migrate:import categories

Bingo bango we have categories. Roll them back if you want with:

drush migrate:rollback categories

Dynamic migration

Now that we have the categories imported in English, let’s see how we can import their translations as well. And for this we will use a dynamic migration using a “template” and a plugin deriver. But first, what are plugin derivatives?

Plugin derivatives

The Drupal plugin system is an incredibly powerful way of structuring and leveraging functionality. You have a task in the application that needs to be done and can be done in multiple ways? Bam! Have a plugin type and define one or more plugins to handle that task in the way they see fit within the boundaries of that subsystem.

And although this is powerful, plugin derivatives are what really makes this an awesome thing. Derivatives are essentially instances of the same plugin but with some differences. And the best thing about them is that they are not defined entirely statically but they are “born” dynamically. Meaning that a plugin can be defined to do something and a deriver will make as many derivatives of that plugin as needed. Let’s see some examples from core to better understand the concept.

Menu links:

Menu links are plugins that are defined in YAML files and which map to the Drupal\Core\Menu\MenuLinkDefault class for their behaviour. However, we also have the Menu Link Content module which allows us to define menu links in the UI. So how does that work? Using derivatives.

The menu links created in the UI are actual content entities. And the Drupal\menu_link_content\Plugin\Deriver\MenuLinkContentDeriver creates as many derivatives of the menu link plugin as there are menu link content entities in the system. Each of these derivatives behaves almost the same as the ones defined in code but contains some differences specific to what has been defined in the UI by the user. For example, the URL (route) of the menu link is not taken from a YAML file definition but from the user-entered value.

Menu blocks:

Keeping with the menu system, another common example of derivatives is menu blocks. Drupal defines a Drupal\system\Plugin\Block\SystemMenuBlock block plugin that renders a menu. But on its own, it doesn't do much. That's where the Drupal\system\Plugin\Derivative\SystemMenuBlock deriver comes into play and creates a plugin derivative for each menu on the site. In doing so, it augments the plugin definitions with the info about the menu to render. And like this we have a block we can place for each menu on the site.

Migration deriver

Now that we know what plugin derivatives are and how they work, let's see how we can apply this to our migration to import the category translations. But why would we actually use a deriver for this? We could simply copy the migration into another one and just use the Romanian label as the term name, no? Well yes…but no.

Our data is now in 2 languages. It could be 23 languages. Or it could be 16. Using a deriver we can make a migration derivative for each available language dynamically and simply change the data field to use for each. Let’s see how we can make this happen.

The first thing we need to do is create another migration that will act as the “template”. In other words, the static parts of the migration which will be the same for each derivative. And as such, it will be like the SystemMenuBlock one in that it won’t be useful on its own.

Let’s call it advanced_migrations.migration.category_translations.yml:

id: category_translations
label: Category translations
migration_group: advanced_migrations
deriver: Drupal\advanced_migrations\CategoriesLanguageDeriver
source:
  plugin: csv
  path: 'modules/custom/advanced_migrations/data/categories.csv'
  header_row_count: 1
  keys:
    - id
  column_names:
    0:
      id: 'Unique Id'
    1:
      label_en: 'Label EN'
    2:
      label_ro: 'Label RO'
destination:
  plugin: entity:taxonomy_term
  translations: true
process:
  vid:
    plugin: default_value
    default_value: categories
  tid:
    plugin: migration_lookup
    source: id
    migration: categories
  content_translation_source:
    plugin: default_value
    default_value: 'en'
migration_dependencies:
  required:
    - categories

Much of it is like the previous migration. There are some important changes though:

  • We use the deriver key to define the deriver class. This will be the class that creates the individual derivative definitions.
  • We configure the destination plugin to accept entity translations. This is needed to ensure we are saving translations and not source entities. Check out Drupal\migrate\Plugin\migrate\destination\EntityContentBase for more info.
  • Unlike the previous migration, we also define a process mapping for the taxonomy term ID (tid). And we use the migration_lookup process plugin to map the IDs to the ones from the original migration. We do this to ensure that our migrated entity translations are associated with the correct source entities. Check out Drupal\migrate\Plugin\migrate\process\MigrationLookup for how this plugin works.
  • Specific to the destination type (content entities), we also need to import a default value into the content_translation_source field if we want the resulting entity translation to be correct. We just default this to English because that was the default language the original migration imported in. This is the source language in the translation set.
  • Finally, because we need to lookup in the original migration, we also define a migration dependency on the original migration. So that the original gets run, followed by all the translation ones.

You’ll notice another important difference: the term name is missing from the mapping. That will be handled in the deriver based on the actual language of the derivative because this is not something we can determine statically at this stage. So let’s see that now.

In our main module namespace we can create this very simple deriver (which we referenced in the migration above):

namespace Drupal\advanced_migrations;

use Drupal\Component\Plugin\Derivative\DeriverBase;
use Drupal\Core\Language\LanguageInterface;
use Drupal\Core\Language\LanguageManagerInterface;
use Drupal\Core\Plugin\Discovery\ContainerDeriverInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Deriver for the category translations.
 */
class CategoriesLanguageDeriver extends DeriverBase implements ContainerDeriverInterface {

  /**
   * @var \Drupal\Core\Language\LanguageManagerInterface
   */
  protected $languageManager;

  /**
   * CategoriesLanguageDeriver constructor.
   *
   * @param \Drupal\Core\Language\LanguageManagerInterface $languageManager
   */
  public function __construct(LanguageManagerInterface $languageManager) {
    $this->languageManager = $languageManager;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, $base_plugin_id) {
    return new static(
      $container->get('language_manager')
    );
  }

  /**
   * {@inheritdoc}
   */
  public function getDerivativeDefinitions($base_plugin_definition) {
    $languages = $this->languageManager->getLanguages();
    foreach ($languages as $language) {
      // We skip EN as that is the original language.
      if ($language->getId() === 'en') {
        continue;
      }

      $derivative = $this->getDerivativeValues($base_plugin_definition, $language);
      $this->derivatives[$language->getId()] = $derivative;
    }

    return $this->derivatives;
  }

  /**
   * Creates a derivative definition for each available language.
   *
   * @param array $base_plugin_definition
   * @param LanguageInterface $language
   *
   * @return array
   */
  protected function getDerivativeValues(array $base_plugin_definition, LanguageInterface $language) {
    $base_plugin_definition['process']['name'] = [
      'plugin' => 'skip_on_empty',
      'method' => 'row',
      'source' => 'label_' . $language->getId(),
    ];

    $base_plugin_definition['process']['langcode'] = [
      'plugin' => 'default_value',
      'default_value' => $language->getId(),
    ];

    return $base_plugin_definition;
  }

}

All plugin derivers extend Drupal\Component\Plugin\Derivative\DeriverBase and have only one method to implement: getDerivativeDefinitions(). And to make our class container-aware, we implement the deriver-specific ContainerDeriverInterface, which provides us with the create() method.

The getDerivativeDefinitions() method receives an array containing the base plugin definition (essentially our entire YAML migration file turned into an array). It needs to return an array of derivative definitions keyed by their derivative IDs, and it's up to us to say what these are. In our case, we simply load all the available languages on the site and create a derivative for each. The definition of each derivative needs to be a "version" of the base one, and we are free to do what we want with it as long as it still remains correct. So for our purposes, we add two process mappings (the ones we need to determine dynamically):

  • The taxonomy term name. But instead of the simple Get plugin, we use the Drupal\migrate\Plugin\migrate\process\SkipOnEmpty one because we don’t want to create a translation at all for this record if the source column label_[langcode] is missing. Makes sense right? Data is never perfect.
  • The translation langcode which defaults to the current derivative language.

And with this we should be ready. We can clear the cache and inspect our migrations again. We should see a new one with the ID category_translations:ro (the base plugin ID + the derivative ID). And we can now run this migration as well and we’ll have our term translations imported.
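For example, assuming Romanian is the only other language enabled on the site, the derived migration can be run with its full plugin ID:

drush migrate:import category_translations:ro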

Other examples

I think dynamic migrations are extremely powerful in certain cases. Importing translations is an extremely common thing to do, and this is a nice way of doing it. But there are other examples as well. For instance, importing Commerce products: you'll create a migration for the products and one for the product variations. But a product can have multiple variations depending on the actual product specification. For example, a product can have three prices depending on three delivery options. So you can dynamically create the product variation migrations for each of the delivery options. Or whatever the use case may be.

Conclusion

As we saw, the Drupal 8 migration system is extremely powerful and flexible. It allows us to concoct all sorts of creative ways to read, clean and save our external data into Drupal. But the reason this system is so powerful is because it rests on the lower-level plugin API which is meant to be used for building such systems. So migrate is one of them. But there are others. And the good news is that you can build complex applications that leverage something like the plugin API for extremely creative solutions. But for now, you learned how to get your translations imported which is a big necessity.

Categories: Drupal

OSTraining: Give a Unique Look to Your Google Maps in Drupal

12 June 2019 - 11:44pm

Google Maps don't look appealing or pretty by default when you embed them in your Drupal content. Nor do they always coordinate nicely with your site's look and feel.

What if you found a way to give them a custom design? For example - your own color? In this tutorial, you will learn how to give your Drupal Google Maps a custom style with the Styled Google Map contrib module.

Categories: Drupal

Drupal Association blog: Drupal Association Board Elections, 2019

12 June 2019 - 11:42pm

With Drupal 9 approaching rapidly, it is an exciting time to be on the Drupal Association Board. The Association must continue to evolve alongside the project so we can continue providing the right kind of support. And, it is the Drupal Association Board who develops the Association’s strategic direction by engaging in discussions around a number of strategic topics throughout their term. As a community member, you can be a part of this important process by becoming an At-large Board Member.

We have two At-large positions on the Association Board of Directors. These positions are self-nominated and then elected by the community. Simply put, each At-large Director position is designed to ensure there is community representation on the Drupal Association Board.

Inclusion 2018

In 2018, we made a special effort to encourage geographic inclusion among the candidates for election, and we were delighted that candidates stood on six continents all across the world — thank you!

2019

Now, in 2019, and recognising we are in the middle of Pride Month, we want to particularly encourage nominations from candidates from underrepresented or marginalised groups in our community. As referenced later in this blog post, anyone is eligible to nominate themselves, and voters can vote for whichever candidate they choose, but we want to encourage this opportunity to amplify the voices of underrepresented groups with representation on the Association Board. And as we meet the candidates, whether they are allies or members of these groups themselves, we hope to center issues of importance to these communities - in addition to the duties of care for the management of the Association that are always central to a board role.

As always, any individual can stand for election to the board, but by centering these important issues we are determined to encourage a board made of diverse members as that gives them the best ability to represent our diverse community.

If you are interested in helping shape the future of the Drupal Association, we encourage you to read this post and nominate yourself between 29 Jun, 2019 and 19 July 2019.

What are the Important Dates?

Self nominations: 29 Jun, 2019 to 19 July, 2019

Meet the candidates: 22 July, 2019 to 26 July, 2019

Voting: 1 August, 2019 to 16 August, 2019

Votes ratified, Winner announced: 3 September, 2019

How do nominations and elections work?

Specifics of the election mechanics were decided through a community-based process in 2012 with participation by dozens of Drupal community members. More details can be found in the proposal that was approved by the Drupal Association Board in 2012 and adapted for use this year.

What does the Drupal Association Board do?

The Board of Directors of the Drupal Association are responsible for financial oversight and setting the strategic direction for serving the Drupal Association’s mission, which we achieve through Drupal.org and DrupalCon. Our mission is: “Drupal powers the best of the Web.  The Drupal Association unites a global open source community to build and promote Drupal.”

New board members will help shape the strategic direction of the Drupal Association. Board members are advised of, but not responsible for, matters related to the day-to-day operations of the Drupal Association, including program execution, staffing, etc.

Directors are expected to contribute around five hours per month and attend three in-person meetings per year (financial assistance is available if required).

Association board members, like all board members for US-based organizations, have three legal obligations: duty of care, duty of loyalty, and duty of obedience. In addition to these legal obligations, there is a lot of practical work that the board undertakes. These generally fall under the fiduciary responsibilities and include:

  • Overseeing Financial Performance

  • Setting Strategy

  • Setting and Reviewing Legal Policies

  • Fundraising

  • Managing the Executive Director

To accomplish all this, the board comes together three times a year during two-day retreats. These usually coincide with the North American and major European Drupal Conferences, as well as one February meeting. As a board member, you should expect to spend a minimum of five hours a month on board activities.

Some of the topics that will be discussed over the next year or two are:

  • Strengthen sustainability

  • Grow Drupal adoption through our channels and partner channels

  • Evolve drupal.org and DrupalCon goals and strategies.

Who can run?

There are no restrictions on who can run, and only self-nominations are accepted.

Before self-nominating, we want candidates to understand what is expected of board members and what types of topics they will discuss during their term. That is why we now require candidates to:

What will I need to do during the elections?

During the elections, members of the Drupal community will ask questions of candidates. You can post comments on candidate profiles here on assoc.drupal.org.

In the past, we held group "meet the candidate" interviews. With many candidates in recent years, group videos didn't allow each candidate to properly express themselves, so we have replaced the group interview and now allow candidates to create their own 3-minute video and add it to their candidate profile page. These videos must be posted by 19 July, 2019, and the Association will promote them to the community from 22 July, 2019. Hint: great candidates are those who exemplify the Drupal Values & Principles; that might provide structure for a candidate video. You are also encouraged to especially consider diversity and inclusion.

How do I run?

From 29 June, 2019, go here to nominate yourself. If you are considering running, please read this post in its entirety and then be prepared to complete the self-nomination form. The form will be open from 29 June, 2019 through 19 July, 2019 at midnight UTC. You'll be asked for some information about yourself and your interest in the Drupal Association Board. When nominations close, your candidate profile will be published and available for Drupal community members to browse. Comments will be enabled, so please monitor your candidate profile so you can respond to questions from community members. We will announce the new board member via our blog and social channels on 3 September, 2019.

As a reminder, you must review the required materials before completing your candidate profile.

Who can vote?

Voting is open to all individuals who have a Drupal.org account by the time nominations open and who have logged in at least once in the past year. If you meet these criteria, your account will be added to the voters list on association.drupal.org and you will be able to vote.

To vote, you will rank candidates in order of preference (1st, 2nd, 3rd, etc.). You do not need to rank every candidate. The results will be calculated using an "instant runoff" method. For an accessible explanation of how instant-runoff vote tabulation works, see the videos linked in this discussion.
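
For illustration only, here is a minimal sketch of instant-runoff tabulation in PHP. This is not the Association's actual counting code; the ballot format and the arbitrary tie-breaking are assumptions made for the example.

```php
<?php

/**
 * Minimal instant-runoff tally (illustrative only; a real election also
 * needs well-defined tie-breaking, which is arbitrary here).
 *
 * Each ballot is an array of candidate names in order of preference,
 * e.g. ['Alice', 'Carol']; voters do not have to rank everyone.
 */
function instant_runoff(array $ballots) {
  $candidates = array_values(array_unique(array_merge(...$ballots)));

  while (count($candidates) > 1) {
    // Count each ballot toward its highest-ranked remaining candidate.
    $tally = array_fill_keys($candidates, 0);
    foreach ($ballots as $ballot) {
      foreach ($ballot as $choice) {
        if (in_array($choice, $candidates, TRUE)) {
          $tally[$choice]++;
          break;
        }
      }
    }

    // A candidate with more than half of the counted votes wins outright.
    $total = array_sum($tally);
    foreach ($tally as $candidate => $votes) {
      if ($votes * 2 > $total) {
        return $candidate;
      }
    }

    // Otherwise eliminate the candidate with the fewest votes and repeat.
    asort($tally);
    reset($tally);
    $eliminated = key($tally);
    $candidates = array_values(array_diff($candidates, [$eliminated]));
  }

  return reset($candidates);
}

// Example: Bob wins once Carol is eliminated in the first round.
print instant_runoff([
  ['Alice', 'Bob'],
  ['Bob'],
  ['Carol', 'Bob'],
  ['Bob', 'Alice'],
  ['Alice'],
]);
```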

Elections process

Voting will be held from 1 August, 2019 to 16 August, 2019. During this period, you can review and comment on candidate profiles on assoc.drupal.org.

Finally, the Drupal Association Board will ratify the election and announce the winner on 3 September, 2019.

Have questions? Please contact Drupal Association Community Liaison, Rachel Lawson.

Finally, many thanks to nedjo for pioneering this process and documenting it so well!

Categories: Drupal

OSTraining: How to Log In to Drupal Without the Login Block

12 June 2019 - 10:09pm

This is actually quite a common question from our students. They start building their Drupal site and then go to work on their blocks or menus.

Then they accidentally disable the "Log in" menu link, and no "Log in" link is displayed on the site anymore, neither for them nor for their visitors.

In this short tip, you will learn how to log in to your Drupal admin page in such a situation.
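
As a quick pointer, the core login routes remain reachable by URL even when the block or menu link is disabled. On a default Drupal 7 or 8 install they are as follows (example.com is a placeholder for your own domain):

```
# Login form:
https://example.com/user/login

# Password reset form:
https://example.com/user/password
```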

Categories: Drupal

heykarthikwithu: Perform HTTP request in Drupal 7

12 June 2019 - 10:02pm
Perform HTTP request in Drupal 7

To perform an HTTP request in Drupal 7, we can use the drupal_http_request() function. It is a flexible and powerful HTTP client implementation that correctly handles GET, POST, PUT, and other HTTP methods, and it handles redirects.
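
A minimal usage sketch (the URL, module name, and payload are placeholders added here for illustration, not part of the original post):

```php
<?php

// Simple GET request.
$response = drupal_http_request('https://example.com/api/items');
if ($response->code == 200) {
  $items = drupal_json_decode($response->data);
}
else {
  $error = isset($response->error) ? $response->error : 'HTTP ' . $response->code;
  watchdog('mymodule', 'Request failed: @error', array('@error' => $error), WATCHDOG_ERROR);
}

// POST request with custom headers, a JSON body, and a timeout (in seconds).
$response = drupal_http_request('https://example.com/api/items', array(
  'method' => 'POST',
  'headers' => array('Content-Type' => 'application/json'),
  'data' => drupal_json_encode(array('title' => 'Hello')),
  'timeout' => 15,
));
```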

heykarthikwithu Thursday, 13 June 2019 - 10:32:53 IST
Categories: Drupal

Evolving Web: Top 4 Takeaways from Drupal North Day 1

12 June 2019 - 8:48pm

Today marked the kick-off of Drupal North 2019, and Evolving Web is excited to be a part of it for the 4th year in a row. Day 1 was packed with trainings, summits (for the 1st time!), and networking opportunities. Here were the key takeaways we saw:

Drupal is for everyone

In the "What is Drupal?" and "Qu'est-ce que c'est Drupal?" trainings by Evolving Web's own Trevor Kjorlien and Adrian Cid Almaguer, everyone from developers to project managers to graphic designers and more took part in a hands-on demonstration of how to build a site with Drupal.

Nobody wants a website

A website is just a tool for you to achieve your larger goals. Whether that be building a community, selling a product, getting donations, providing information, or anything else, your website has to be designed with your goals in mind. That being said:

Focus on what your audience wants, not what you want

Your website should always make your audience's life easier and give them what they are looking for as quickly as possible. It's important to step out of your own shoes and into theirs in order to have a good understanding of what they want, so you can cater to those needs.

Students really love chocolate

While sharing her experiences in getting students to participate in UX/UI studies, Joyce Peralta from McGill University explained that sometimes it's the small incentives that can be the most effective. Through many attempts, she found that students could be easily swayed by a simple table full of chocolate bars situated in a prime location in the library. Simple but effective!

Drupal North started off on a great foot and we're looking forward to the next two days of sessions. If you're attending, make sure to check out presentations from our team:

+ more awesome articles by Evolving Web
Categories: Drupal

Jacob Rockowitz: Webform Open Collective Office Hours

12 June 2019 - 11:08am

In my post, Drupal is frustrating, I stated that enterprise websites need, want, and are willing to pay for better support options when using Open Source software. Organizations have reached out to me as a Webform module subject matter expert (SME) seeking to start a 1-to-1 support relationship. Occasionally, these relationships result in a sponsored feature request. Sometimes organizations want to ask me a simple question or at least know that I am available to answer questions. In the past, I shied away from the idea of setting up regular office hours because it would be an unpaid commitment of my time during business hours. Fortunately, with the existing funds collected by the Webform module's Open Collective, I feel that now is a good time to experiment and set up some initial office hours for the Webform module.

About office hours

The goal of office hours is to make it easier for me to help people and organizations with questions and issues related to the Webform module for Drupal 8 as well as to assist current and future Webform module contributors.

Sponsor office hours

Sponsor office hours are intended to help backers of the Webform module's Open Collective with any Webform related questions or challenges. These office hours will be strictly for monthly sponsors and backers of the Webform module's Open Collective.

Add-ons office hours

Add-ons office hours are for anyone in the Drupal community building Webform add-ons and extensions that are being contributed back to the open source community. The goal of these hours is to help support and improve the quality of the projects and community around the Webform module.

Office hour guidelines

I've been...Read More

Categories: Drupal

Palantir: Leading Patient Engagement Solutions Company

12 June 2019 - 10:58am
Leading Patient Engagement Solutions Company brandt Wed, 06/12/2019 - 12:58

Content modeling as a practical foundation for future scalability in Drupal.

Palantir recently partnered with a patient engagement solutions company that specializes in delivering patient and physician education to deliver improved health outcomes and an enhanced patient experience. They have an extensive library of patient education content that they use to build education playlists which are delivered to more than 51,000 physician offices, 1,000 hospitals, and 140,000 healthcare providers - and they are still growing.

The company is in the process of completely overhauling their technical stack so that they can rapidly scale up the number of products they use to deliver their patient education library. Currently, every piece of content needs to be entered separately for each product it can be delivered on, which forces the content teams to work in silos. In addition, because the company uses a dozen different taxonomies and tagging correctly requires a high level of context and nuance, content can only be tagged at the manager level or above. The company partnered with Palantir.net to remove these bottlenecks and plan for future scalability.

Key Outcome

Palantir teamed up with this patient engagement solutions company to develop a master content model that:

  • Captures key content types and their relationships
  • Creates a standardized structure for content, including fields that enable serving content variations based on end-point devices and localization
  • Incorporates a taxonomy that enables content admins to quickly filter and select content relevant to their needs and device

Enabling Scalable Growth

The company’s content library is only getting larger over time, so the core need driving the master content model is to enable scalable growth. Specifically, that means a future state where:

  • New products can be added and old products deprecated without restructuring content. 
  • Content filtering can scale up for new product capabilities, languages, and specialties without having to be fundamentally reworked. 
  • Clients using the taxonomy find it intuitive and require minimal specific training to create and amend their own patient education playlists. 

These principles guided our recommendations for the content model and taxonomy.

Content Model

Our client’s content model is currently organized by the end product that content is delivered through - for example, a waiting room screen vs. an interactive exam room touchscreen. This approach requires the digital team to enter the same piece of content multiple times.

To streamline this process for the team, we recommended a master content model that is organized by the purpose of the content, including the mindset of the audience and the high-level strategy for delivering value with that content.

For example, a “highlight” is a small piece of content intended to engage the audience and draw them into deeper exploration, while a “quiz” is a test of knowledge of a particular topic as training or entertainment.

This approach allows the company to separate the content types from products, which in turn makes them easier to scale. For example, this wireframe shows how a single piece of quiz content can be delivered on a range of endpoint devices depending on which fields that device uses. This approach allows us to show how a quiz might be delivered on a voice device, which is a product the company does not yet support, but could in the future.

“Our content is tailored to different audiences with different endpoints. Palantir took the initiative to not only learn about all of our content paths, but to also learn how our content managers interact with it on a daily basis. We’ve relied heavily on their expertise, especially for taxonomy, and they delivered.”

Executive Vice President, Content & Creative

Taxonomy

The company’s taxonomy has 12 separate vocabularies, and using them to construct meaningful content playlists requires a deep understanding of both the content and the audience. Existing content has been tagged based on both the information it contains and based on the patients to whom it would be relevant.

For example, a significant proportion of cardiology patients are affected by diabetes, so a piece of content titled "Healthy Eating with Diabetes" would be tagged with both "Diabetes" and "Cardiology". Additionally, many tags have subtle differences in how they are used — when do you use "cardiology" vs. "cardiovascular conditions"? "OB/GYN" vs. "Women's Health"?

This system requires that everyone managing the content — from content creators to healthcare providers and staff selecting content to appear in their medical practice — understand the full set of terms and the nuance of how they are applied in order to tag content consistently.

Our goal was to develop a taxonomy that can be used to filter content effectively without requiring deep platform-specific context and nuance.

Our guiding principles were to:

  • Tag based on the information in the content.
  • Use terms that are meaningful to a general audience.
  • Use combinations of tags to provide granularity.
  • Avoid duplicate information that is available as properties of the content.

We ultimately recommended a set of eight vocabularies. Two of them are based on company-specific business processes, and the remaining six are standards-based so that any practitioner can use them. By using combinations of terms, users can create playlists that are balanced in terms of educational and editorial content.

For example, in our recommended taxonomy, relevant content is tagged as referencing diabetes, so that the person building the playlist can still construct effective content playlists, without needing to carry in their head the nuance that many cardiology patients are also diabetic.
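
As a hypothetical sketch of what that filtering could look like in Drupal 8 (the field name, vocabulary, and content type are assumptions, not the delivered taxonomy), a playlist builder could query by a plainly named condition term:

```php
<?php

// Find published education content tagged with the "Diabetes" condition term,
// without needing to know which specialties those patients also fall under.
$nids = \Drupal::entityQuery('node')
  ->condition('status', 1)
  ->condition('field_conditions.entity.name', 'Diabetes')
  ->accessCheck(TRUE)
  ->execute();

$content = \Drupal\node\Entity\Node::loadMultiple($nids);
```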

Moving Forward With Next Steps

This content modeling engagement spanned 9 weeks, and the Palantir team delivered:

  • A high-level content model identifying the core content types and their relationships
  • A set of global content fields that all content types in the model should have
  • A field level content model for the four most important content types
  • A new taxonomy approach based on internal user testing
  • A Drupal Demo code base showing how the content types and taxonomy can be built in Drupal 8

The company’s ultimate goal for the platform is to scale their engagement offerings with new content and new technology. With our purpose-driven content model and refined taxonomy, the company can scale their business by breaking down internal content silos and making tagging and filtering content consistent and predictable for their internal team and, eventually, their customers. Palantir’s master content modeling work forms a practical foundation for the company’s radical re-platforming effort.

Categories: Drupal

Sooper Drupal Themes: Open Source Software: Here is why it's OUR future

12 June 2019 - 7:16am
The World is Moving Towards Open Source Software

Open source software has been around for some time now. When it first came out, open source software was perceived as risky and immature. However, with the passage of time, more and more companies started developing and building upon open source. A couple of great open source examples that have been pioneering the industry for a while now are Drupal CMS and Linux OS.

What is Open Source Software?

So, what exactly is open source software? Well, open source describes the type of software that has no proprietary license attached to it. Instead, it's published with a license that guarantees the software will forever be free to download, distribute, and use. This also means that unlike proprietary software, the code can be inspected by anybody. On top of that, if somebody wants to customize the code to their needs by changing it, they are free to do it.

Proprietary software is often the exact opposite: the code cannot be copied and distributed freely, modifications to the code are prohibited, and if issues arise you cannot fix them yourself. You have to rely on the software vendor to fix the problem for you.

Open source has its set of advantages as well as its disadvantages. 

Advantages of Open Source Software

So, you might wonder what the specific advantages of open source are compared to software with a proprietary license. Here are some of them:

  • Flexibility: Open source software is known for its great flexibility, which comes from the fact that the code is open: people are able to customize it to their needs.

  • Speed: Competition in the digital era is fiercer than ever before, and one of the defining factors in a company's success over its competition is the speed of innovation. Companies that use open source software know that it facilitates speed: by not having to deal with the bureaucracy that comes with proprietary software, everything can be set up to work quickly and reliably.

  • Cost efficiency: Another trump card in the arsenal of open source software is its cost efficiency. Open source software can be used by anyone free of charge. Much of it is released under licenses such as the GNU General Public License, which ensures that anyone who distributes the software, or a modified version of it, must also make the source code available for others to use. Successful open source communities leverage the power of the community by providing good infrastructure for sharing and reviewing software extensions and improvements.

  • Security: Proprietary software has had a reputation of being more secure than its open source counterpart, partly due to the popular belief that if the source code is hidden from the public, hackers will have a harder time cracking it. However, this is far from the truth. The code of open source software is available for everybody to see, which could in theory make it more vulnerable, but because everyone has access to it, the code is easier to peer review. Vulnerabilities are spotted much sooner than in proprietary code, which makes it easier for developers to fix them.

Disadvantages of Open Source Software

Now that we’ve talked about the advantages of open source, we should also discuss its shortcomings.

  • Not user-friendly: A common problem with open source projects is a lack of focus on design and user-friendliness. People might have a harder time adapting to the interface of open source software compared to competing proprietary platforms. Of course, this is not true for all open source projects, but well-funded companies are generally better able to attract and afford the best designers.

  • Hidden costs: Although open source software is hailed as free to use, it is not always free in practice. When adopting new software for a business, a decision maker also has to take other factors into account. For example, it is easy to overlook the cost of setting up and customizing the software for the company, paying for employee training, or hiring skilled personnel who can actually operate the software. Even if the adoption is not for business use, a time investment still has to be made in order to use the software to its full potential.

  • Lackluster support: When it comes to proprietary software, there are often dedicated departments ready to help a struggling user with their issues. In contrast, most open source software does not enjoy the same level of support. Open source does tend to gather dedicated communities that can be helpful in solving some issues, but keep in mind that these people are not paid for their service and might not be able to solve every issue that arises.

  • Orphan software: Proprietary software can enjoy a longer lifespan than its open source counterparts. One of the risks of using OSS is that the community, the developers, or both lose interest in the project or move on to another one. What this means is that the software will stop being developed and supported, and its users will be left high and dry, forced to migrate to another platform. Of course, there are also plenty of commercial software projects that go out of business, but strong commercial backing does increase confidence in the continuity of the software. Some open source projects have loosely associated commercial backing, like Red Hat backing Linux and Acquia backing Drupal.

Tech Giants buy Open Source Software Companies

Lately, more and more tech giants have been establishing a presence in the open source market. A few examples are IBM, AT&T, and Microsoft.

IBM acquires Red Hat

On 28 October 2018, IBM announced its acquisition of Red Hat for $34 billion, a gargantuan amount of money. The aim of this acquisition is for IBM to shape the cloud and open source market for years to come, and IBM is betting a lot of money on it in order to secure a lead in the market. There are some skeptics of this acquisition, however, who claim that IBM is going to ruin the Red Hat culture, as its track record suggests, a kind of corporate colonization. Only time will tell how this acquisition will shape the future of open source software. Nevertheless, IBM's willingness to spend so much money proves that open source software is seriously a path to the future.

AT&T acquires AlienVault

AlienVault is the developer of an open source solution that manages cyber attacks. It includes the Open Threat Exchange, the world's largest crowd-sourced computer security platform. AlienVault was acquired by AT&T on 22 August 2018 and has since been renamed AT&T Cybersecurity. With the high reach and resources of AT&T, the former AlienVault is sure to have a bigger impact on the cyber safety of the world. However, the acquisition sparked a lot of controversy, with some supporters of AlienVault claiming that this is the end of the brand. That much is true, since the company was renamed, but time will tell whether there will be more radical changes to its business model under AT&T's ownership.

Acquia acquires Mautic

With the acquisition of the open source marketing automation tool Mautic on 8 May 2019, Acquia is aiming to strengthen its presence on the open source software scene. Together with Mautic, Acquia is going to deliver the only open source alternative to proprietary solutions, expanding on Acquia's vision to deliver the industry's first Open Digital Experience Platform. On top of that, unlike the other two companies, Acquia has a strong open source culture, making the acquisition of Mautic a well-thought-out business decision.

Apps, Plug-ins, and Services: When Open Source Mingles With Closed Source Software

Android, Google, and Huawei

Android is an open source operating system for mobile phones, formally known as the AOSP (Android Open Source Project). It is developed by Google, based on a modified version of the Linux kernel, and designed primarily for touchscreen mobile devices. It is licensed under Apache 2.0, which allows users to modify it and distribute their modifications if they choose to. Even so, in the recent case of the U.S. ban on Huawei, Google announced that the new trade embargo forced it to retract Huawei's Android license. Since Android is open source, the OS itself is still free to use. However, practically all Android devices outside of China come with Google services and apps pre-installed, and these Google apps play an important role on any Android device. Google can do this because apps like Google Maps, YouTube, Gmail, and the Play Store are not open source, and companies need a license agreement in order to ship them on their devices. The Google Play Store is also a paid service; it provides security checks and code validation for app updates, forming a very important security layer on the Android platform.

To add insult to injury, losing the partnership with Google means Huawei will not get timely security updates to the AOSP Android platform. When Google fixes vulnerabilities, it first sends the fix to partners, and only after partners have had time to publish the update to their devices does the patch become public. This means Huawei's devices will have increased exposure to hackers and viruses in the window before each security patch is published and pushed to Huawei devices.

Sooperthemes: Providing and Supporting Paid Drupal Extensions

Here at Sooperthemes, we are passionate about the Drupal project. We want to see Drupal thrive and become better than its competitors. In order to do that, we had to find out in which areas Drupal can be improved. As it turns out, there was a strong need for Drupal to be easier to navigate and to use for site-building by users who work in a marketing or communication department and do not have deep technical knowledge. That's why Sooperthemes developed Glazed Builder, a powerful visual page builder that anyone can use without needing to write or even see any code. With Glazed Builder, Sooperthemes wants to make the power of Drupal accessible to a wider audience and to make it easy for them to build, maintain, and grow a Drupal-based website.

Although other open source platforms like Android, WordPress, and even Linux OS have had a thriving ecosystem of paid applications and plugins for many years, the same cannot be said for Drupal. Fortunately, with our 13+ years of experience in the Drupal community, we were able to create a combination of product and service that thrives within it.

Conclusion

As the latest trends show, open source seems to be here to stay and to become a staple of software in the near future. This prediction is based not only on the benefits that open source software brings but also on the amount of interest that major companies in the tech world are showing towards it. The most successful recipe seems to be a mix of an open source platform and paid-for applications. The paid-for applications are especially handy for components that require more involvement from marketing and UX design experts, who are not typical contributors in open source software communities.

Categories: Drupal
