Newsfeeds

Klipfolio field

New Drupal Modules - 10 January 2019 - 4:57am

The module provides a field for Klipfolio Klips.

Categories: Drupal

Scroll Up

New Drupal Modules - 10 January 2019 - 3:24am

Scroll Up allows users to scroll the page back to the top of the screen using a button. It uses simple jQuery for smooth page scrolling.

The module provides configuration for:

  • Theme-wise visibility of the scroll button
  • Positioning of the scroll button at the bottom of the page
  • Scrolling speed
  • Scroll distance after which the button should appear
  • Background and hover colors of the scroll button
Categories: Drupal

Digital Echidna: Thoughts on all things digital: Join Us for Drupal Global Contribution Weekend

Planet Drupal - 10 January 2019 - 3:01am
Drupal. It’s been the foundation of our solutions for a few years now and it powers some of the top sites around the world in fields ranging from commerce to government. If you’ve ever been interested in getting your feet wet with the CMS, or…
Categories: Drupal

GDPR Cookie Compliance

New Drupal Modules - 10 January 2019 - 2:24am

WHAT IS GDPR?
General Data Protection Regulation (GDPR) is a European regulation to strengthen and unify the data protection of EU citizens. (https://www.eugdpr.org/)


Synopsis

This module provides users with an alert about the site's cookie policy.
Installation
To install the module:

Copy the entire cookie_gdpr directory into the Drupal /modules/custom directory.
Log in as an administrator and enable the module.

To remove the module:

Categories: Drupal

OpenSense Labs: The SIWECOS: German Government Sponsored CMS Security

Planet Drupal - 10 January 2019 - 1:34am
The SIWECOS: German Government Sponsored CMS Security Vasundhra Thu, 01/10/2019 - 17:49

Website owners are often trapped inside an imaginary bubble where they reach conclusions like "There are more valuable sites out on the web, so why would hackers target mine?"

And alas, the bubble bursts when they observe that hackers have attacked their site, because, let's face it, attackers don't discriminate: they want a website to attack, and they have it.

For open source CMSs like Drupal, WordPress, and Joomla, the scenario is the same. As popular as these platforms are, they are targets for all sorts of attacks. Cybercriminals discover security loopholes and hack websites in no time.


Which leaves us with the assumption that these platforms (which together hold 68.5% of the CMS market) must provide some form of protection.

And yes, the assumption is true.

Birth of SIWECOS 

The SIWECOS ("Secure Websites and Content Management Systems") project is a security project funded by the German Ministry of Economics that aims to improve the security of CMS-based websites (which of course include Drupal, WordPress, Joomla, and many others).

The project was designed to help small and medium-sized enterprises (SMEs) identify and correct the security loopholes on their websites. It focused on concrete recommendations for action in the event of damage, and on sensitizing SMEs to cybersecurity.

The project's vulnerability scanner helped SMEs check their server systems regularly and made them well acquainted with the vulnerabilities that can occur in a web application. In addition, a service for web hosts was presented which actively communicated acute security vulnerabilities and offered filtering capabilities to prevent cyber attacks.

End users were also protected against potential data losses as well as financial losses.

Initiative-S

The longer-run aim of SIWECOS was to increase web security and raise proper awareness of the relevance of IT security for SMEs. Initiative-S came out as a ray of hope for supporting small and medium-sized enterprises: a government-funded project built by eco, the association of the German internet industry.

The association built a web interface called "clamavi", which gives users the ability to enter their domain and run a malware scan of the source code once per day. The website check of Initiative-S was then integrated into the new SIWECOS project, and the proven Initiative-S technology now supplements the portfolio of the SIWECOS service with a check for possible malware infestation.

Importance of the Project 

As mentioned, the whole project revolved around the security of CMS platforms. From the time it was started, the project took two years to complete. Its mission was to provide end users with:

  • An understanding of the importance of security, along with individual notifications and recommendations on the security issues of their website.
  • A long-term increase in web security, and the ability to identify and address security vulnerabilities on their website.
  • Faster patching for ordinary users. Patching is the application of updates (patches) to existing code that either add functionality or correct vulnerabilities.
  • Scanning of registered users' websites. If any security vulnerabilities were found, the responsible person in the field of IT security was contacted directly.

What does SIWECOS have in General?

SIWECOS, in general, had three parts:

Awareness Building

This is the detailed version of the introduction and of the process for subscribing to the service. The team reached out to end users, which included not only site owners but also the people who have to maintain the sites later. The major purpose of the awareness campaign was to influence user behavior, since improvements cannot take place without changes in attitudes and perceptions.

Scanning Service

The whole scanning system in the Scanning Service is based on an open source API embedded inside it. It gives end users a score between zero and one hundred, giving them an idea of how secure or insecure their setup is.

Behind the score are five scanners used to check the website, including its HTML code, for problems:

  • HTTP Header Scanner

Ensures that your server instructs the browser to enable security features (an illustrative example of such headers follows this list).

  • Info leak Scanner

Verifies if the site exposes security-relevant information.

  • TLS scanner

Checks the HTTPS encryption for known issues, outdated certificates, the chain of trust, etc.

  • Initiative S Scanner 

This scanner checks the website for viruses or looks for third-party content such as phishing.

  • DOMXSS Scanner

This scanner verifies that the website is protected against DOMXSS attacks. 
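
To make this concrete, here is a minimal, hypothetical Drupal 8 event subscriber that adds a few of the response headers a check like the HTTP Header Scanner looks for. It is not part of SIWECOS: the module name (example_security) and the header values are assumptions, and the class would still need to be registered as a tagged event_subscriber service in the module's services.yml file.

<?php

namespace Drupal\example_security\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * Adds the kind of security headers that header scanners check for.
 */
class SecurityHeadersSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    return [KernelEvents::RESPONSE => 'onResponse'];
  }

  public function onResponse(FilterResponseEvent $event) {
    $headers = $event->getResponse()->headers;
    // Tell browsers to use HTTPS only (send this once the site is HTTPS-only).
    $headers->set('Strict-Transport-Security', 'max-age=31536000');
    // Stop browsers from MIME-sniffing responses into other content types.
    $headers->set('X-Content-Type-Options', 'nosniff');
    // Mitigate clickjacking by disallowing framing from other origins.
    $headers->set('X-Frame-Options', 'SAMEORIGIN');
    // A restrictive Content-Security-Policy also limits DOM-based XSS.
    $headers->set('Content-Security-Policy', "default-src 'self'");
  }

}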

Web Host

The companies that power the service behind a website are generally called web hosts. A web host's team should have basic technical knowledge and security awareness, and should actively communicate filter rules to defend against attacks.

The need for Filter rules - to limit the circle of recipients. 

Without firewall rules, it is easy for experienced attackers to probe and exploit a website as they wish. By filtering incoming and outgoing network traffic (based on a set of user-defined rules), unwanted network communication was reduced.

Another reason to involve web hosts was server-side protection: the server side was protected against attacks on the web pages installed at the web host, in order to protect web page operators.

Partners in the Project

The SIWECOS project included four main partners that contributed heavily to it:

Eco

Eco ("electronic commerce") is the largest association of the internet industry in Europe. The association sees itself as representing the interests of the internet economy and has set itself the goal of promoting technologies, shaping framework conditions, and representing the interests of its members. The eco group spans the whole internet industry and promotes current and future internet topics.

The awareness-building part was handled mainly by the eco association because of its strength in marketing and networking.
 



RUB 

Ruhr-University Bochum (RUB), located on the southern hills of the central Ruhr area in Bochum, is one of the partners in the project. It has one of the most proven track records in the IT security industry. RUB was included in the project to build a scanning engine that gives business owners feedback about potential security problems on their sites, such as SSL misconfiguration or vulnerability to cross-site scripting attacks.



HACKMANIT

Hackmanit GmbH was founded by IT security experts from Ruhr-University Bochum. They have international publications on XML security, SSL/TLS, single sign-on, cross-site scripting, and UI redressing. The company's priorities are high-quality penetration testing, hands-on training, and tailor-made expertise. The organization has in-depth knowledge of the security of web applications, web services, and applied cryptography. The team offers white-box and black-box tests which protect applications from the effects of all sorts of hacker attacks.



CMS Garden

CMS Garden is the umbrella organization of the most relevant and active open source content management systems. The team started in 2013 with a shoutout to the CMS communities to join in and, sure enough, several CMS platforms were interested. By 2013, there were 12 open source CMS systems in one place.

CMS Garden also contributes to a series of plugins for different open source CMSs that provide feedback from within the CMS management interface, so that site owners can act immediately when they encounter a security vulnerability.
 

In the End 

Website attacks and cyber attacks are growing rapidly. These attacks cost organizations millions of dollars, subject them to lawsuits, and can ruin livelihoods.

SIWECOS is like a shield for websites and CMS platforms: it protects them against cyber attacks and hackers of all sorts, helping to keep up security and guard against vulnerabilities.

We know how important web security is for protecting your online identity and personal information. If you're concerned about web security for your business, or about other network issues, our services can help. Contact us at hello@opensenselabs.com and our professionals will guide you through your queries and questions and help you leverage security for your website.

Tags: Drupal, Drupal 8, CMS, SIWECOS, Security, Initiative-S Scanner, Eco, RUB, Hackmanit, CMS Garden, Protection
Categories: Drupal

Sandy's Soapbox: Pep Talk for College-as-Video-Game

RPGNet - 10 January 2019 - 12:00am
Is college an easier game than high school?
Categories: Game Theory & Design

Drupal Mountain Camp: Drupal Mountain Camp 2019 - Open Source on top of the World - Davos, Switzerland, March 7-10

Planet Drupal - 9 January 2019 - 11:41pm
Drupal Mountain Camp 2019 - Open Source on top of the World - Davos, Switzerland, March 7-10 admin Thu, 01/10/2019 - 08:41 Preview

Introduction

Drupal Mountain Camp brings together experts and newcomers in web development to share their knowledge in creating interactive websites using Drupal and related web technologies. We are committed to uniting a diverse crowd from different disciplines such as developers, designers, and project managers, as well as agency and community leaders.

Keynotes

The future of Drupal communities

For the first keynote, Drupal community leaders such as Nick Veenhof and Imre Gmelig Meijling will discuss successful models for creating sustainable open source communities and how we can improve collaboration in the future to ensure even more success for the open web. This keynote panel will be moderated by Rachel Lawson.

Drupal Admin UI & JavaScript Modernisation initiative

In the second keynote, Matthew Grill, one of the Drupal 8 JavaScript subsystem maintainers, will present on the importance and significance of the Admin UI & JavaScript Modernisation initiative and Drupal's JavaScript future.

Sessions

In sessions, we will share the latest and greatest in Drupal web development, as well as learn from real-world implementation case studies. Workshops will enable you to grow your web development skills in a hands-on setting. Sprints will show you how contributing to Drupal can teach you a lot while improving the system for everyone.

Swiss Splash Awards

As a highlight, the Swiss Splash Awards will determine the best Swiss Drupal web projects selected by an independent jury in 9 different categories. These projects will also participate in the global Splash Awards at DrupalCon Europe 2019.

Location

Drupal Mountain Camp takes place at Davos Congress. As proven by various other prominent conferences, and by our own event in 2017, this venue provides a great space for meeting each other. We are glad to be able to offer conference attendees high-quality equipment and flawless internet access, all in an inspiring setting. Davos is located high up in the Swiss Alps, reachable from Zurich airport within a beautiful two-hour train ride up the mountains.

The camp

Drupal Mountain Camp is all about creating a unique experience, so prepare for some fun social activities. We'll make sure you can test the slopes on skis or a snowboard, or join us for evening activities open to any skill level, such as sledding or ice skating.

Tickets

Drupal Mountain Camp is committed to being a non-profit event, with early bird tickets available for just CHF 80,- covering the 3-day conference including food for attendees. This wouldn't be possible without the generous support of our sponsors. Sponsorship packages are still available; the following are already confirmed: Gold sponsors: MD Systems, platform.sh, Amazee Labs. Silver: soul.media, Gridonic, Hostpoint AG, Wondrous, Happy Coding, Previon+. Hosting partner: amazee.io.

Key dates
  • Early bird tickets for CHF 80,- are available until Monday January 14th, 2019

  • Call for sessions and workshops is open until January 21st, 2019

  • Selected program is announced on January 28th, 2019

  • Splash Award submissions are open until February 4th, 2019

  • Regular tickets for CHF 120,- are available until February 28th, 2019; after that, late bird tickets cost CHF 140,-

  • Drupal Mountain Camp takes place in Davos Switzerland from March 7-10th, 2019

Join us in Davos!

Visit https://drupalmountaincamp.ch or check our promotion slides to find out more about the conference, secure your ticket and join us to create a unique Drupal Mountain Camp 2019 - Open Source on top of the World in Davos, Switzerland March 7-10th, 2019.

Drupal Mountain Camp is brought to you by Drupal Events, the Swiss Drupal association formed to promote and cultivate Drupal in Switzerland.

Categories: Drupal

The JRPG Startup Cost - by Radek Koncewicz

Gamasutra.com Blogs - 9 January 2019 - 10:05pm
Timing various gameplay elements in JRPGs of the 4th console generation: the Genesis, Sega CD, Super Nintendo, and the Game Boy. Was the random encounter grind really that bad?
Categories: Game Theory & Design

Virtuoso Performance: Drupal file migrations: The s3fs module

Planet Drupal - 9 January 2019 - 11:56am
Drupal file migrations: The s3fs module mikeryan Wednesday, January 9, 2019 - 01:56pm

A recent project gave me the opportunity to familiarize myself with the Drupal 8 version of the S3 File System (s3fs) module (having used the D7 version briefly in the distant past). This module provides an s3:// stream wrapper for files stored in an S3 bucket, allowing them to be used as seamlessly as locally stored public and private files. First we present the migrations and some of the plugins implemented to support import of files stored on S3 - below we will go into some of the challenges we faced.

Our client was already storing video files in an S3 bucket, and it was decided that for the Drupal site we would also store image files there. The client handled bulk uploading of images to an "image" folder within the bucket, using the same (relative) paths as those stored for the images in the legacy database. Thus, for migration we did not need to physically copy files around (the bane of many a media migration!) - we "merely" needed to create the appropriate entities in Drupal pointing at the S3 location of the files.

The following examples are modified from the committed code - to obfuscate the client/project, and to simplify so we focus on the subject at hand.

Image migrations

Gallery images

In the legacy database all gallery images were stored in a table named asset_metadata, which is structured very much like Drupal's file_managed table, with the file paths in an asset_path column. The file migration looked like this:

id: acme_image
source:
  plugin: acme
process:
  filename:
    plugin: callback
    callable: basename
    source: asset_path
  uri:
    # Construct the S3 URI - see implementation below.
    plugin: acme_s3_uri
    source: asset_path
  # Source data created/last_modified fields are YYYY-MM-DD HH:MM:SS - convert
  # them to the classic UNIX timestamps Drupal loves. Oh, and they're optional,
  # so when empty leave them empty and let Drupal set them to the current time.
  created:
    - plugin: skip_on_empty
      source: created
      method: process
    - plugin: callback
      callable: strtotime
  changed:
    - plugin: skip_on_empty
      source: last_modified
      method: process
    - plugin: callback
      callable: strtotime
destination:
  plugin: entity:file

Because we also needed to construct the S3 URIs in places besides the acme_s3_uri process plugin, we implemented the construction in a trait which cleans up some inconsistencies and prepends the image location:

trait AcmeMakeS3Uri {

  /**
   * Turn a legacy image path into an S3 URI.
   *
   * @param string $value
   *
   * @return string
   */
  protected function makeS3Uri($value) {
    // Some have leading tabs.
    $value = trim($value);
    // Path fields are inconsistent about leading slashes.
    $value = ltrim($value, '/');
    // Sometimes they contain doubled-up slashes.
    $value = str_replace('//', '/', $value);
    return 's3://image/' . $value;
  }

}

So, the process plugin in the image migration above uses the trait to construct the URI, and verifies that the file is actually in S3 - if not, we skip it. See the Challenges and Contributions section below for more on the s3fs_file table.

/**
 * Turn a legacy image path into an S3 URI.
 *
 * @MigrateProcessPlugin(
 *   id = "acme_s3_uri"
 * )
 */
class AcmeS3Uri extends ProcessPluginBase {

  use AcmeMakeS3Uri;

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    $uri = $this->makeS3Uri($value);
    // For now, skip any images not cached by s3fs.
    $s3_uri = \Drupal::database()->select('s3fs_file', 's3')
      ->fields('s3', ['uri'])
      ->condition('uri', $uri)
      ->execute()
      ->fetchField();
    if (!$s3_uri) {
      throw new MigrateSkipRowException("$uri missing from s3fs_file table");
    }
    return $uri;
  }

}

The above creates the file entities - next, we need to create the media entities that reference the files above via entity reference fields (and add other fields). These media entities are then referenced from content entities.

id: acme_image_media
source:
  plugin: acme
process:
  # For the media "name" property - displayed at /admin/content/media - our
  # first choice is the image caption, followed by the "event_name" field in
  # our source table. If necessary, we fall back to the original image path.
  name:
    - # Produces an array containing only the non-empty values.
      plugin: callback
      callable: array_filter
      source:
        - caption
        - event_name
        - asset_path
    - # From the array, pass on the first value as a scalar.
      plugin: callback
      callable: current
    - # Some captions are longer than the name property length.
      plugin: substr
      length: 255
  # Entity reference to the image - convert the source ID to Drupal's file ID.
  field_media_image/target_id:
    plugin: migration_lookup
    migration: acme_image
    source: id
  # Use the name we computed above as the alt text.
  field_media_image/alt: '@name'
  # We need to explicitly set the image dimensions in the field's width/height
  # subfields (more on this below under Challenges and Contributions). Note that in
  # the process pipeline you can effectively create temporary fields which can be
  # used later in the pipeline - just be sure they won't conflict with
  # anything that might be used within the Drupal entity.
  _uri:
    plugin: acme_s3_uri
    source: asset_path
  _image_dimensions:
    plugin: acme_image_dimensions
    source: '@_uri'
  field_media_image/width: '@_image_dimensions/width'
  field_media_image/height: '@_image_dimensions/height'
  caption: caption
destination:
  plugin: entity:media
  default_bundle: image
migration_dependencies:
  required:
    - acme_image

Other images

The gallery images have their own metadata table - but, there are many other images which are simply stored as paths in content tables (in some cases, there are multiple such path fields in a single table). One might be tempted to deal with these in process plugins in the content migrations - creating the file and media entities on the fly - but that would be, well, ugly. Instead we implemented a drush command, run before our migration tasks, to canonicalize and gather those paths into a single table, which then feeds the acme_image_consolidated and acme_image_media_consolidated migrations (which end up being simpler versions of acme_image and acme_image_media, since "path" is the only available source field).

function drush_acme_migrate_gather_images() {
  // Key is legacy table name, value is list of image path columns to migrate.
  $table_fields = [
    'person' => [
      'profile_picture_path',
      'left_standing_path',
      'right_standing_path',
    ],
    'event' => [
      'feature_image',
      'secondary_feature_image',
    ],
    'subevent' => [
      'generated_medium_thumbnail',
    ],
    'news_article' => [
      'thumbnail',
    ]
  ];
  $legacy_db = Database::getConnection('default', 'migrate');
  // Create the table if necessary.
  if (!$legacy_db->schema()->tableExists('consolidated_image_paths')) {
    $table = [
      'fields' => [
        'path' => [
          'type' => 'varchar',
          'length' => 191, // Longest known path is 170.
          'not null' => TRUE,
        ]
      ],
      'primary key' => ['path'],
    ];
    $legacy_db->schema()->createTable('consolidated_image_paths', $table);
    drush_print('Created consolidated_image_paths table');
  }
  $max = 0;
  foreach ($table_fields as $table => $field_list) {
    drush_print("Gathering paths from $table");
    $count = 0;
    $query = $legacy_db->select($table, 't')
      ->fields('t', $field_list);
    foreach ($query->execute() as $row) {
      // Iterate the image path columns returned in the row.
      foreach ($row as $path) {
        if ($path) {
          $len = strlen($path);
          if ($len > $max) $max = $len;
          $path = str_replace('//', '/', $path);
          $count++;
          $legacy_db->merge('consolidated_image_paths')
            ->key('path', $path)
            ->execute();
        }
      }
    }
    // Note we will end up with far fewer rows in the table due to duplication.
    drush_print("$count paths added from $table");
  }
  drush_print("Maximum path length is $max");
}

Video migrations

The legacy database contained a media table referencing videos tagged with three different types: internal, external, and embedded. "Internal" videos were those stored in S3 with a relative path in the internal_url column; "external" videos (most on client-specific domains, but with some YouTube domains as well) had a full URL in the external_url column; and "embedded" videos were, with very few exceptions, YouTube videos with the YouTube ID in the embedded_id column. It was decided that we would migrate the internal and YouTube videos, ignoring the rest of the external/embedded videos. Here we focus on the internal (S3-based) videos.

id: acme_video
source:
  plugin: acme_internal_video
  constants:
    s3_prefix: s3://
process:
  _trimmed_url:
    # Since the callback process plugin only permits a single source value to be
    # passed to the specified PHP function, we have a custom plugin which enables us
    # to pass a character list to be trimmed.
    plugin: acme_trim
    source: internal_url
    trim_type: left
    charlist: /
  uri:
    - plugin: concat
      source:
        - constants/s3_prefix
        - '@_trimmed_url'
    - # Make sure the referenced file actually exists in S3 (does a simple query on
      # the s3fs_file table, throwing MigrateSkipRowException if missing).
      plugin: acme_skip_missing_file
  fid:
    # This operates much like migrate_plus's entity_lookup, to return an existing
    # entity ID based on arbitrary properties. The purpose here is if the file URI
    # is already in file_managed, point the migrate map table to the existing file
    # entity - otherwise, a new file entity will be created.
    plugin: acme_load_by_properties
    entity_type: file
    properties: uri
    source: '@uri'
    default_value: NULL
  filename:
    plugin: callback
    callable: basename
    source: '@uri'
destination:
  plugin: entity:file

The media entity migration is pretty straightforward:

id: acme_video_media
source:
  plugin: acme_internal_video
  constants:
    true: 1
process:
  status: published
  name: title
  caption: caption
  # The source column media_date is YYYY-MM-DD HH:DD:SS format - the Drupal field is
  # configured as date-only, so the source value must be truncated to YYYY-MM-DD.
  date:
    - plugin: skip_on_empty
      source: media_date
      method: process
    - plugin: substr
      length: 10
  field_media_video/0/target_id:
    - plugin: migration_lookup
      migration: acme_video
      source: id
      no_stub: true
    - # If we haven't migrated a file entity, skip this media entity.
      plugin: skip_on_empty
      method: row
  field_media_video/0/display: constants/true
  field_media_video/0/description: caption
destination:
  plugin: entity:media
  default_bundle: video
migration_dependencies:
  required:
    - acme_video

Did I mention that we needed to create a node for each video, linking to related content of other types? Here we go:

id: acme_video_node
source:
  plugin: acme_internal_video
  constants:
    text_format: formatted
    url_prefix: http://www.acme.com/media/
    s3_prefix: s3://image/
process:
  title: title
  status: published
  teaser/value: caption
  teaser/format: constants/text_format
  length:
    # Converts HH:MM:SS to integer seconds. Left as an exercise to the reader.
    plugin: acme_video_length
    source: duration
  video:
    plugin: migration_lookup
    migration: acme_video_media
    source: id
    no_stub: true
  # Field to preserve the original URL.
  old_url:
    plugin: concat
    source:
      - constants/url_prefix
      - url_name
  _trimmed_thumbnail:
    plugin: acme_trim
    trim_type: left
    charlist: '/'
    source: thumbnail
  teaser_image:
    - plugin: skip_on_empty
      source: '@_trimmed_thumbnail'
      method: process
    - # Form the URI as stored in file_managed.
      plugin: concat
      source:
        - constants/s3_prefix
        - '@_trimmed_thumbnail'
    - # Look up the fid.
      plugin: acme_load_by_properties
      entity_type: file
      properties: uri
    - # Find the media entity referencing that fid.
      plugin: acme_load_by_properties
      entity_type: media
      properties: field_media_image
  # Note that for each of these entity reference fields, we skipped some content,
  # so need to make sure stubs aren't created for the missing content. Also note
  # that the source fields here are populated in a PREPARE_ROW event.
  related_people:
    plugin: migration_lookup
    migration: acme_people
    source: related_people
    no_stub: true
  related_events:
    plugin: migration_lookup
    migration: acme_event
    source: related_events
    no_stub: true
  tag_keyword:
    plugin: migration_lookup
    migration: acme_keyword
    source: keyword_ids
    no_stub: true
destination:
  plugin: entity:node
  default_bundle: video
migration_dependencies:
  required:
    - acme_image_media
    - acme_video_media
    - acme_people
    - acme_event
    - acme_keyword

Auditing missing files

A useful thing to know (particularly with the client incrementally populating the S3 bucket with image files) is what files are referenced in the legacy tables but not actually in the bucket. Below is a drush command we threw together to answer that question - it will query each legacy image or video path field we're using, construct the s3:// version of the path, and look it up in the s3fs_file table to see if it exists in S3.

/**
 * Find files missing from S3.
 */
function drush_acme_migrate_missing_files() {
  $legacy_db = Database::getConnection('default', 'migrate');
  $drupal_db = Database::getConnection();
  $table_fields = [
    [
      'table_name' => 'asset_metadata',
      'url_column' => 'asset_path',
      'date_column' => 'created',
    ],
    [
      'table_name' => 'media',
      'url_column' => 'internal_url',
      'date_column' => 'media_date',
    ],
    [
      'table_name' => 'person',
      'url_column' => 'profile_picture_path',
      'date_column' => 'created',
    ],
    // … on to 9 more columns among three more tables...
  ];
  $header = 'uri,legacy_table,legacy_column,date';
  drush_print($header);
  foreach ($table_fields as $table_info) {
    $missing_count = 0;
    $total_count = 0;
    $table_name = $table_info['table_name'];
    $url_column = $table_info['url_column'];
    $date_column = $table_info['date_column'];
    $query = $legacy_db->select($table_name, 't')
      ->fields('t', [$url_column])
      ->isNotNull($url_column)
      ->condition($url_column, '', '<>');
    if ($table_name == 'media') {
      $query->condition('type', 'INTERNALVIDEO');
    }
    if ($table_name == 'people') {
      // This table functions much like Drupal's node table.
      $query->innerJoin('publishable_entity', 'pe', 't.id=pe.id');
      $query->fields('pe', [$date_column]);
    }
    else {
      $query->fields('t', [$date_column]);
    }
    $query->distinct();
    foreach ($query->execute() as $row) {
      $path = trim($row->$url_column);
      if ($path) {
        $total_count++;
        // Paths are inconsistent about leading slashes.
        $path = ltrim($path, '/');
        // Sometimes they have doubled-up slashes.
        $path = str_replace('//', '/', $path);
        if ($table_name == 'media') {
          $s3_path = 's3://' . $path;
        }
        else {
          $s3_path = 's3://image/' . $path;
        }
        $s3 = $drupal_db->select('s3fs_file', 's3')
          ->fields('s3', ['uri'])
          ->condition('uri', $s3_path)
          ->execute()
          ->fetchField();
        if (!$s3) {
          $output_row = "$s3_path,$table_name,$url_column,{$row->$date_column}";
          drush_print($output_row);
          $missing_count++;
        }
      }
    }
    drush_log("$missing_count of $total_count files missing in $table_name column $url_column", 'ok');
  }
}

Challenges and contributions

The s3fs module's primary use case is where the configured S3 bucket is used only by the Drupal site, and populated directly by file uploads through Drupal - our project was an outlier in terms of having all files in the S3 bucket first, and in sheer volume. A critical piece of the implementation is the s3fs_file table, which caches metadata for all files in the bucket so Drupal rarely needs to access the bucket itself other than on file upload (since file URIs are converted to direct S3 URLs when rendering, web clients go directly to S3 to fetch files, not through Drupal). In our case, the client had an existing S3 bucket which contained all the video files (and more) used by their legacy site, and to which they bulk uploaded image files directly so we did not need to do this during migration. The module does have an s3fs-refresh-cache command to populate the s3fs_file table from the current bucket contents, but we did have to deal with some issues around the cache table.

Restriction on URI lengths

As soon as we started trying to use drush s3fs-refresh-cache, we ran into the existing issue Getting Exception 'PDOException'SQLSTATE[22001] When Running drush s3fs-refresh-cache - URIs in the bucket longer than the 255-character length of s3fs_file's uri column. The exception aborted the refresh entirely, and because the refresh operation generates a temporary version of the table from scratch, then swaps it for the "live" table, the exception prevented any file metadata from being refreshed if there was one overflowing URI. I submitted a patch implementing the simplest workaround - just generating a message and ignoring overly-long URIs. Discussion continues around an alternate approach, but we used my patch in our project.

Lost primary key

So, once we got the cache refresh to work, we found serious performance problems. We had stumbled on an existing issue, "s3fs_file" table has no primary key. I tracked down the cause - because the uri column is 255 characters long, with InnoDB it cannot be indexed when using a multibyte collation such as utf8_general_ci. And Drupal core has a bug, DatabaseSchema_mysql::createTableSql() can't set table collation, preventing the setting of the utf8_bin collation directly in the table schema. The s3fs module works around that bug when creating the s3fs_file table at install time by altering the collation after table creation - but the cache refresh created a new cache table using only the schema definition and did not pick up the altered collation. Thus, only people like us who used cache refresh would lose the index, and those with more modest bucket sizes might never even notice. My patch to apply the collation (later refined by jansete) was committed to the s3fs module.
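
The workaround amounts to a collation change applied right after the table is created. The following is an illustrative approximation of that idea rather than the module's literal code:

// Illustrative only: after creating s3fs_file from its schema definition,
// switch the table to a single-byte collation so the 255-character uri
// column can be indexed, as described above.
\Drupal::database()->query(
  "ALTER TABLE {s3fs_file} CONVERT TO CHARACTER SET utf8 COLLATE utf8_bin"
);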

Scalability of cache refresh

As the client loaded more and more images into the bucket, drush s3fs-refresh-cache started running out of memory. Our bucket was quite large (1.7 million files at last count), and the refresh function gathered all file metadata in memory before writing it to the database. I submitted a patch to chunk the metadata to the db within the loop, which has been committed to the module.

Image dimensions

Once there were lots of images in S3 to migrate, the image media migrations were running excruciatingly slowly. I quickly guessed and confirmed that they were accessing the files directly from S3, and then (less quickly) stepped through the debugger to find the reason - the image fields needed the image width and height, and since this data wasn't available from the source database to be directly mapped in the migration, it went out and fetched the S3 image to get the dimensions itself. This was, of course, necessary - but given that migrations were being repeatedly run for testing on various environments, there was no reason to do it repeatedly. Thus, we introduced an image dimension cache table to capture the width and height the first time we imported an image, and any subsequent imports of that image only needed to get the cached dimensions.
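
The post doesn't show how that cache table is created, so here is a minimal sketch. It assumes the table lives on the legacy ("migrate") connection and is created with the same pattern used for consolidated_image_paths above; the function name and column sizes are assumptions, while the table and column names match the code that follows.

use Drupal\Core\Database\Database;

/**
 * Hypothetical helper: ensure the image dimension cache table exists.
 */
function acme_migrate_ensure_image_dimension_cache() {
  $legacy_db = Database::getConnection('default', 'migrate');
  if (!$legacy_db->schema()->tableExists('s3fs_image_cache')) {
    $table = [
      'fields' => [
        // 191 keeps the primary key indexable under multibyte collations
        // (see "Lost primary key" above).
        'uri' => ['type' => 'varchar', 'length' => 191, 'not null' => TRUE],
        'width' => ['type' => 'int', 'unsigned' => TRUE, 'not null' => FALSE],
        'height' => ['type' => 'int', 'unsigned' => TRUE, 'not null' => FALSE],
      ],
      'primary key' => ['uri'],
    ];
    $legacy_db->schema()->createTable('s3fs_image_cache', $table);
  }
}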

In the acme_image_media migration above, we use this process plugin which takes the image URI and returns an array with width and height keys populated with the cached values if present, and NULL if the dimensions are not yet cached:

/**
 * Fetch cached dimensions for an image path (purportedly) in S3.
 *
 * @MigrateProcessPlugin(
 *   id = "acme_image_dimensions"
 * )
 */
class AcmeImageDimensions extends ProcessPluginBase {

  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    $dimensions = Database::getConnection('default', 'migrate')
      ->select('s3fs_image_cache', 's3')
      ->fields('s3', ['width', 'height'])
      ->condition('uri', $value)
      ->execute()
      ->fetchAssoc();
    if (empty($dimensions)) {
      return ['width' => NULL, 'height' => NULL];
    }
    return $dimensions;
  }

}

If the dimensions were empty, when the media entity was saved Drupal core fetched the image from S3 and the width and height were saved to the image field table. We then caught the migration POST_ROW_SAVE event to cache the dimensions:

class AcmeMigrateSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    $events[MigrateEvents::POST_ROW_SAVE] = 'import';
    return $events;
  }

  public function import(MigratePostRowSaveEvent $event) {
    $row = $event->getRow();
    // For image media, if width/height have been freshly obtained, cache them.
    if (strpos($event->getMigration()->id(), 'image_media') > 0) {
      // Note that this "temporary variable" was populated in the migration as a
      // width/height array, using the acme_image_dimensions process plugin.
      $original_dimensions = $row->getDestinationProperty('_image_dimensions');
      // If the dimensions are populated, everything's fine and all of this is skipped.
      if (empty($original_dimensions['width'])) {
        // Find the media entity ID.
        $destination_id_values = $event->getDestinationIdValues();
        if (is_array($destination_id_values)) {
          $destination_id = reset($destination_id_values);
          // For performance, cheat and look directly at the table instead of doing
          // an entity query.
          $dimensions = Database::getConnection()
            ->select('media__field_media_image', 'msi')
            ->fields('msi', ['field_media_image_width', 'field_media_image_height'])
            ->condition('entity_id', $destination_id)
            ->execute()
            ->fetchAssoc();
          // If we have dimensions, cache them.
          if ($dimensions && !empty($dimensions['field_media_image_width'])) {
            $uri = $row->getDestinationProperty('_uri');
            Database::getConnection('default', 'migrate')
              ->merge('s3fs_image_cache')
              ->key('uri', $uri)
              ->fields([
                'width' => $dimensions['field_media_image_width'],
                'height' => $dimensions['field_media_image_height'],
              ])
              ->execute();
          }
        }
      }
    }
  }

}

Safely testing with the bucket

Another problem with the size of our bucket was that it was too large to economically make and maintain a separate copy to use for development and testing. So, we needed to use the single bucket - but of course, the videos in it were being used in the live site, so it was critical not to mess with them. We decided to use the live bucket with credentials allowing us to read and add files to the bucket, but not delete them - this would permit us to test uploading files through the admin interface, and most importantly from a migration standpoint access the files, but not do any damage. Worst-case scenario would be the inability to clean out test files, but writing a cleanup tool after the fact to clear any extra files out would be simple enough. Between this, and the fact that images were in a separate folder in the bucket (and we weren't doing any uploads of videos, simply migrating references to them), the risk of using the live bucket was felt to be acceptable. At first, though, the client was having trouble finding credentials that worked as we needed. As a short-term workaround, I implemented a configuration option for the s3fs module to disable deletion in the stream wrapper.

Investigating the permissions issues with my own test bucket, trying to add the bare minimum permissions needed for reading and writing objects, I arrived at a point where migration worked as desired, and deletion was prevented - but uploading files to the bucket through Drupal silently failed. There was an existing issue in the s3fs queue but it had not been diagnosed. I finally figured out the cause (Slack comment - "God, the layers of middleware I had to step through to find the precise point of death…") - by default, objects are private when uploaded to S3, and you need to explicitly set public-read in the ACL. Which the s3fs module does - but, to do this requires the PutObjectAcl policy, which I had not set (I've suggested the s3fs validator could detect and warn of this situation). Adding that policy enabled everything to work; once the client applied the necessary policies we were in business…

… for a while. The use of a single bucket became a problem once front-end developers began actively testing with image styles, and we were close enough to launch to enable deletion so image styles could be flushed when changed. The derivatives for S3 images are themselves stored in S3 - and with people generating derivatives in different environments, the s3fs_file table in any given environment (in particular the "live" environment on Pantheon, where the eventual production site was taking shape) became out of sync with the actual contents of S3. In particular, if styles were generated in the live environment then flushed in another environment, the live cache table would still contain entries for the derived styles (thus the site would generate URLs to them) even though they didn't actually exist in S3 - thus, no derived images would render. To address this, we had each environment set the s3fs root_folder option so they would each have their own sandbox - developers could then work on image styles at least with files they uploaded locally for testing, although their environments would not then see the "real" files in the bucket.
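
For reference, a per-environment sandbox like this can be set with a configuration override in settings.php. This is a minimal sketch that assumes the module's configuration object is s3fs.settings and the key is root_folder; check the names against your installed version of the module.

// In the environment-specific settings.php (or settings.local.php).
// Assumed configuration names: verify them against your s3fs version.
$config['s3fs.settings']['root_folder'] = 'sandbox-dev';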

We discussed more permanent alternatives and Sean Blommaert put forth some suggestions in the s3fs issue queue - ultimately (after site launch) we found there is an existing (if minimally maintained) module extending stage_file_proxy. I will most certainly work with this module on any future projects using s3fs.

The tl;dr - lessons learned

To summarize the things to keep in mind if planning on using s3fs in your Drupal project:

  1. Install the s3fs_file_proxy_to_s3 module first thing, and make sure all environments have it enabled and configured.
  2. Make sure the credentials you use for your S3 bucket have the PutObjectAcl permission - this is non-obvious but essential if you are to publicly serve files from S3.
  3. Watch your URI lengths - if the s3://… form of the URI is > 255 characters, it won't work (Drupal's file_managed table has a 255-character limit). When using image styles, the effective limit is significantly lower due to folders added to the path.
  4. With image fields which reference images stored in S3, if you don't have width and height to set on the field at entity creation time, you'll want to implement a caching solution similar to the above.
Acknowledgements

Apart from the image style issues, most of the direct development detailed above was mine, but as on any project thoughts were bounced off the team, project managers handled communication with the client, testers provided feedback, etc. Thanks to the whole team, particularly Sean Blommaert (image styles, post feedback), Kevin Thompson and Willy Karam (client communications), and Karoly Negyesi (post feedback).

Tags: Migration, Drupal Planet, Drupal, PHP

Use the Twitter thread below to comment on this post:

https://t.co/erY3Gvhd97

— Virtuoso Performance (@VirtPerformance) January 9, 2019

 

Categories: Drupal

Drupal blog: Refreshing the Drupal administration UI

Planet Drupal - 9 January 2019 - 11:29am

This blog has been re-posted and edited with permission from Dries Buytaert's blog.

Last year, I talked to nearly one hundred Drupal agency owners to understand what is preventing them from selling Drupal. One of the most common responses raised is that Drupal's administration UI looks outdated.

This critique is not wrong. Drupal's current administration UI was originally designed almost ten years ago when we were working on Drupal 7. In the last ten years, the world did not stand still; design trends changed, user interfaces became more dynamic and end-user expectations have changed with that.

To be fair, Drupal's administration UI has received numerous improvements in the past ten years; Drupal 8 shipped with a new toolbar, an updated content creation experience, more WYSIWYG functionality, and even some design updates.

A comparison of the Drupal 7 and Drupal 8 content creation screen to highlight some of the improvements in Drupal 8.

While we made important improvements between Drupal 7 and Drupal 8, the feedback from the Drupal agency owners doesn't lie: we have not done enough to keep Drupal's administration UI modern and up-to-date.

This is something we need to address.

We are introducing a new design system that defines a complete set of principles, patterns, and tools for updating Drupal's administration UI.

In the short term, we plan on updating the existing administration UI with the new design system. Longer term, we are working on creating a completely new JavaScript-based administration UI.

The content administration screen with the new design system.

As you can see on Drupal.org, community feedback on the proposal is overwhelmingly positive, with comments like "Wow! Such an improvement!" and "Well done! High contrast and modern look."

Sample space sizing guidelines from the new design system.

I also ran the new design system by a few people who spend their days selling Drupal and they described it as "clean" with "good use of space" and a design they would be confident showing to prospective customers.

Whether you are a Drupal end-user, or in the business of selling Drupal, I recommend you check out the new design system and provide your feedback on Drupal.org.

Special thanks to Cristina Chumillas, Sascha Eggenberger, Roy Scholten, Archita Arora, Dennis Cohn, Ricardo Marcelino, Balazs Kantor, Lewis Nyman, and Antonella Severo for all the work on the new design system so far!

We have started implementing the new design system as a contributed theme with the name Claro. We are aiming to release a beta version for testing in the spring of 2019 and to include it in Drupal core as an experimental theme by Drupal 8.8.0 in December 2019. With more help, we might be able to get it done faster.

Throughout the development of the refreshed administration theme, we will run usability studies to ensure that the new theme indeed is an improvement over the current experience, and we can iteratively improve it along the way.

Acquia has committed to being an early adopter of the theme through the Acquia Lightning distribution, broadening the potential base of projects that can test and provide feedback on the refresh. Hopefully other organizations and projects will do the same.

How can I help?

The team is looking for more designers and frontend developers to get involved. You can attend the weekly meetings on #javascript on Drupal Slack Mondays at 16:30 UTC and on #admin-ui on Drupal Slack Wednesdays at 14:30 UTC.

Thanks to Lauri Eskola, Gábor Hojtsy, and Jeff Beeman for their help with this post.

Categories: Drupal

Steam now supports social media links on Store pages

Social/Online Games - Gamasutra - 9 January 2019 - 11:23am

Valve has made room for developers to link to their YouTube, Facebook, Twitter, and Twitch accounts on Steam listings for their games. ...

Categories: Game Theory & Design

FFW Agency: It’s time to start planning for Drupal 9

Planet Drupal - 9 January 2019 - 10:24am
It’s time to start planning for Drupal 9 leigh.anderson Wed, 01/09/2019 - 18:24

Drupal 9 is coming. Even if it feels like you only just upgraded to Drupal 8, soon it’ll be time to make the switch to the next version. Fortunately, the shift from Drupal 8 to Drupal 9 should be relatively painless for most organizations. Here’s why.

A little background

Though tools were built in to make the upgrade from Drupal 6 or 7 to Drupal 8 run as smoothly as possible, it could still be a difficult or dramatic process. Drupal 8 marked a major shift for the Drupal world: it introduced major new dependencies, such as Symfony, and a host of new features in Core. The new structure of the software made it tricky to upgrade sites in the first place, which was complicated by the fact that it took a long time for a number of modules to be properly optimized and secured for the new version.

Drupal 9: A natural extension of Drupal 8

Fortunately, the large number of changes made to the Drupal platform in Drupal 8 has made it relatively simple to build, expand, and upgrade for the future. The new software has been designed specifically to make the transition between Drupal 8 and Drupal 9 simple, so that the migration requires little more work than upgrading between minor versions of Drupal 8.

In fact, as Dries Buytaert (the founder and project lead of Drupal) wrote recently in a blog on Drupal.org:

Instead of working on Drupal 9 in a separate codebase, we are building Drupal 9 in Drupal 8. This means that we are adding new functionality as backwards-compatible code and experimental features. Once the code becomes stable, we deprecate any old functionality.

Planning for Drupal 9

As more information is released about the new features and updates in Drupal 9, organizations should consider their digital roadmaps and how the new platform will affect them. And regardless of what your plans are feature-wise, your organization should begin planning to upgrade to Drupal 9 no later than the summer of 2021. The reason is that the projected end-of-life for the Drupal 8 software is November 2021, when Symfony 3 (Drupal 8's largest major dependency) will no longer be supported by its own community.

In the meantime, the best thing your organization can do to prepare for the launch of Drupal 9 is to make sure that you keep your Drupal 8 site fully up to date.

For help planning out your Drupal roadmap, and to make sure that you’ll be ready for a smooth upgrade to Drupal 9 when it releases, contact FFW. We’re here to help you plan out your long-term Drupal strategy and make sure that your team can make the most of your site today, tomorrow, and after Drupal 9 is released.

Categories: Drupal

InternetDevels: A glimpse at creating layouts in Drupal 8 with the Layout Builder module

Planet Drupal - 9 January 2019 - 8:26am

Everyone loves attractive layouts for web pages. Luckily, Drupal has plenty of awesome page building tools. You will hear such tool names as Panels, Panelizer, Paragraphs, Display Suite, Page Manager, Twig, and more.

Read more
Categories: Drupal

My insight on how level flow is applied in games like Uncharted 4 & The Last of Us - by Trinh Nguyen

Gamasutra.com Blogs - 9 January 2019 - 7:48am
Over the past few months I have been doing research on level flow and environment design in games like "Uncharted 4" and "The Last of Us". This blog will be a crash course on what level flow is and how level designers apply it.
Categories: Game Theory & Design

Virtual Cities: A Look At Rubacava - by Konstantinos Dimopoulos

Gamasutra.com Blogs - 9 January 2019 - 7:12am
An excerpt from the forthcoming Virtual Cities atlas taking a closer look at Grim Fandango's Rubacava.
Categories: Game Theory & Design

Growing the Conversation on Game Design - by Josh Bycer

Gamasutra.com Blogs - 9 January 2019 - 7:03am
Despite how it has come to define this industry, the study of game design for a lot of people continues to be discounted, and it's time for that to change.
Categories: Game Theory & Design

Netflix's Bandersnatch UX: Cinema meets Gaming, how to make the marriage last! - by Om Tandon

Gamasutra.com Blogs - 9 January 2019 - 7:02am
Bandersnatch, Netflix's attempt at gamifying cinema, is commendable but leaves more to be desired; here are ideas on how it can benefit from the deep learnings and psychology of games.
Categories: Game Theory & Design

Simple Comment Notify

New Drupal Modules - 9 January 2019 - 6:22am

Coming soon...

Categories: Drupal

Joachim's blog: Getting more than you bargained for: removing a Drupal module with Composer

Planet Drupal - 9 January 2019 - 6:19am

It's no secret that I find Composer a very troublesome piece of software to work with.

I have issues with Composer on two fronts. First, its output is extremely user-unfriendly, such as the long lists of impenetrable statements about dependencies that it produces when it tells you why it can't make a change you request. Second, many Composer commands have unwanted side-effects, and these work against the practice that changes to your codebase should be as simple as possible for the sake of developer sanity, testing, and user acceptance.

I recently discovered that removing packages is one such task where Composer has ideas of its own. A command such as remove drupal/foo will take it upon itself to also update some apparently unrelated packages, meaning that you either have to manage the deployment of these updates as part of your uninstallation of a module, or roll up your sleeves and hack into the mess Composer has made of your codebase.

Guess which option I went for.

Step 1: Remove the module you actually want to remove

Let's suppose we want to remove the Drupal module 'foo' from the codebase because we're no longer using it:

$ composer remove drupal/foo

This will have two side effects, one of which you might want, and one of which you definitely don't.

Side effect 1: dependent packages are removed

This is fine, in theory. You probably don't need the modules that are dependencies of foo. Except... Composer knows about dependencies declared in composer.json, which for Drupal modules might be different from the dependencies declared in module info.yml files (if maintainers haven't been careful to ensure they match).

Furthermore, Composer doesn't know about Drupal configuration dependencies. You could have the situation where you installed module Foo, which had a dependency on Bar, so you installed that too. But then you found Bar was quite useful in itself, and you've created content and configuration on your site that depends on Bar. Ideally, at that point, you should have declared Bar explicitly in your project's root composer.json, but most likely, you haven't.

So at this point, you should go through Composer's output of what it's removed, and check that your site doesn't have any of those Drupal modules still enabled.

I recommend taking the list of Drupal modules that Composer has just told you it's removed in addition to the requested one, and checking their status on your live site:

$ drush pml | ag MODULE

If you find that any modules are still enabled, then revert the changes you've just made with the remove command, and declare the modules in your root composer.json, copying the declaration from the composer.json file of the module you are removing. Then start step 1 again.

Side effect 2: unrelated packages are updated

This is undesirable basically because any package update is something that has to be evaluated and tested before it's deployed. Having that happen as part of a package removal turns what should be a straight-forward task into something complex and unpredictable. It's forcing the developer to handle two operations that should be separate as one.

(It turns out that the maintainers of Composer don't even consider this to be a problem, and as I have unfortunately come to expect, the issue on github is a fine example of bad maintainership (for the nadir, see the issue on the use of JSON as a format for the main composer file) -- dismissing the problems that users explain they have, claiming the problems are by design, and so on.)

So to revert this, you need to pick apart the changes Composer has made, and reverse some of them.

Before you go any further, commit everything that Composer changed with the remove command. In my preferred method of operation, that means all the files, including the modules folder and the vendor folder. I know that Composer recommends you don't do that, but frankly I think trusting Composer not to damage your codebase on a whim is folly: you need to be able to back out of any mess it may make.

Step 2: Repair composer.lock

The composer.lock file is the record of how the packages currently are, so to undo some of the changes Composer made, we undo some of the changes made to this file, then get Composer to update based on the lock.

First, restore the version of composer.lock to how it was before you started:

$ git checkout HEAD^ composer.lock

Unstage it. I prefer a GUI for git staging and unstaging operations, but on the command line it's:

$ git reset composer.lock

Your composer lock file now looks as it did before you started.

Use either git add -p or your favourite git GUI to pick out the right bits. Understanding which bits are the 'right bits' takes a bit of mental gymnastics: overall, we want to keep the changes in the last commit that removed packages completely, but we want to discard the changes that upgrade packages.

But here we've got a reverted diff. So in terms of what we have here, we want to discard changes that re-add a package, and stage and commit the changes that downgrade packages.

When you're done staging you should have:

  • the change to the content hash should be unstaged.
  • chunks that are a whole package should be unstaged
  • chunks that change version should be staged (be sure to get all the bits that relate to a package)

Then commit what is staged, and discard the rest.

Then do a git diff of composer.lock against your starting point: you should see only complete package removals.

Step 3: Restore packages with unrelated changes

Finally, do:

$ composer update --lock

This will restore the packages that Composer updated against your will in step 1 to their original state.

If you are committing Composer-managed packages to your repository, commit them now.

As a final sanity check, do a git diff against your starting point, like this:

$ git diff --name-status master

You should see mostly deleted files. To verify there's nothing that shouldn't be there in the changed files, do:

$ git diff --name-status master | ag '^[^D]'

You should see only composer.json, composer.lock, and the autoloader's files.

PS. If I am wrong and there IS a way to get Composer to remove a package without side effects, please tell me.

I feel I have exhausted all the options of the remove command:

  • --no-update only changes composer.json, and makes no changes to package files at all. I'm not sure what the point of this is.
  • --no-update-with-dependencies only removes the one package, and doesn't remove any of its dependencies that are not required anywhere else. This leaves you having to pick through composer.json files yourself and remove dependencies individually, which completely defeats the purpose of a package manager!

Why is something as simple as a package removal turned into a complex operation by Composer? Honestly, I'm baffled. I've tried reasoning with the maintainers, and it's a brick wall.

Tags: Composer
Categories: Drupal

Missing Session Zero

Gnome Stew - 9 January 2019 - 5:00am


In a recent GnomeCast, Ang, Matt, and I talked about how to approach session zero and how to handle launching a campaign. Near the end of the recording, Matt wondered how to handle a missing player. I decided to write an article (this one!) about that very occurrence.


In case you missed the episode, here’s the basic run-down of what session zero is for. Session zero occurs before the campaign launches. It sets the groundwork for genre, system, play style, social contracts, safety measures while at the table, setting agreements (or even creation), character creation, tying the disparate characters together into a cohesive unit via backstory or hooks, and so on. It’s building the foundation the campaign will stand upon for the weeks, months, or years to come. Another thing that I like to do during session zero is to have an introductory encounter (not always combat) between the party and either the world or some NPCs to set the tone and drop some adventure hooks in front of the players.

Now to Matt’s question: What does the GM do when a player misses out on session zero? I have two different answers based on why the player was absent.

Intentional Absence

If the player, for whatever reason, intentionally decided to drop out of the session zero experience, I tend to be a little more stern with them. All of the players need to be involved in session zero for it to be as effective as possible. I'll take the world, setting, city, NPC, party, and introductory hook notes and compile them into a PDF (or several, if necessary). Then I'll email the PDF(s) to the player and tell them that the reading is mandatory in order for them to know what is going on with the future of the game.


The reason I make the reading mandatory is that we once had a player miss session zero. That session included a high-level lesson from the GM on "string theory" and quantum physics. None of us came out as experts, but it was the foundation for why our spacecraft had faster-than-light travel built in while all other spaceships had to use jump-gates. During the first session, the player who had missed out on this foundational knowledge really goofed up on the controls of the ship (this was player ignorance, not a bad roll or an in-character moment). We ended up shooting across the galaxy and away from the objectives the GM had carefully planned out. Oops. Yeah, we could have stepped in and retconned the poor decision on the navigation of the ship, but that's not our game style. We ran with the change in the story arcs, but I felt bad for the GM, who had to toss aside sheaves of paper full of his meticulous notes.

I’m not a huge planning GM. I do more improv, but there is some planning and prep work that goes into getting ready for the gaming session. Having a player completely disregard the group’s efforts to get together during session zero is inconsiderate and rude, to be honest.

What about the player's character? When I email the PDF(s) to the player, I give them a narrow scope of character types to pick from to round out the party. Then I tell the player to create a character within that narrow scope of choices and to show up with a completed character (ready for me to review and approve) when they arrive for the first session.

If possible, I try to keep an open email dialogue going with the player so they can get ideas, their character, etc. to me sooner rather than later. This gives me time to go through the new character and provide feedback before the first session kicks off.

Unplanned Absence

There are times when real life gets in the way of gaming. I completely understand that. Just a few weeks ago, we had two players carpooling to the game after a snow/ice storm. The weather was clear, but the remote roads were not. They ended up in a ditch and against a fence. Even though someone pulled them out and they made it to the gaming location, they were no longer in the mood to roll some dice. I get that. I probably wouldn't want to game either after a harrowing experience like that. (Note: Neither person was injured, so we were very thankful for that.) The litany of ways real life can intrude on our gaming plans is as lengthy as the history of the universe is old.


Be understanding. Be generous. Be kind. Know that the player wanted to be there, but could not because of whatever legitimate reason came up. My approach here is drastically different from the “intentional absence” response.

The first thing I do is see if I can set up a time to meet one-on-one with the missing player. If we can get together before the first session, great. During that one-on-one meeting, I'll outline what's been decided and ask whether they have any input or changes to what has been put down. Then I'll give them a quick run-down of the characters that already exist in the party and work with them on making a character that they'll enjoy but that will still fit into the party and mesh well. I'll also work with them on background information to tie their new character to at least two other party members.

Once we finish up the meeting, I'll reach out to the rest of the players with any world/setting changes, so they won't be surprised. Then I'll email (on the side) the players who have had the new character's background "attached" to their own, so they can hash out any further details via email before we sit down at the table again.

If I can’t get a meeting together with the missing player, then I resort to emails. Lots of emails. I’ll compile the same documents into PDF(s) and email those to the player and ask them to read through it before determining their character. I’ll leave the character concepts as wide open as I can, but still with the limitations that the new character fit in with the rest of the party. In other words, if the entire party is made up of rangers, paladins, and cavaliers, I wouldn’t allow the new character to be an assassin… because…. well… That’s just asking for trouble, right?

If things work out well, we’ll have everything nailed down and the player will have a character they like when they show up at the table for the first session.

Conclusion

It sounds like I’m harsh with the “intentional absence” player, but I like to set the tone of expectations early on. If I allow the player to slide early on, I’ve found through experience that they will be problematic throughout the campaign’s run. By nipping it in the bud early on, things run smoother.

I’m also completely understanding of things getting in the way of gaming. It happens. I don’t have to like it, but I get it. I won’t punish a player for missing any session because their work, children, loved ones, car troubles, or just life in general get in the way. As a matter of fact, I’ll go out of my way to work around all of that to see if we can get back on track.

How do all of you out there in the Internet gamer land handle folks who miss key or vital sessions?

Categories: Game Theory & Design
