A presence on the main social media sites is now standard for brands, but the social sites were not originally designed for marketing and hence there are many possible pitfalls. As the sites have evolved and their applications for business have become clearer, new tools and facilities have been added to control and moderate content that is not posted by the business itself.
The importance of monitoring what is said is easily demonstrated – barely a week goes by without news of an internet publicity problem for a business. These issues mostly arise when ill-advised advertising ties in with a newsworthy event. In the days when complaints came by letter or telephone, few people would notice – but now that customer backlash appears online, bad publicity can go viral in minutes.
It is clearly now more important than ever to ‘engage brain before opening mouth’ when making statements on behalf of a brand. However, the open nature of social media means that brands leave themselves open to adverse comments, spam attacks and other problems if there is no control over what is posted, whether or not the brand has caused the problem itself. So what facilities are available on the main social media sites to keep content under control, without stifling debate and feedback?
Facebook provides some basic content moderation facilities for fan pages. These include a two-level profanity filter that should catch the most undesirable spam posts that can hit any page. The site also offers the ability to restrict the countries in which the page can be seen – either allowing a list of countries, or blocking a list. Finally there is a simple moderation setting – either ‘show all posts by default’ or ‘hide all posts by default’, the latter pending approval by the page admins. While a popular fan page may create a lot of work for the moderator, leaving a page with all posts showing can be fraught with risk. At the very least, the profanity filter should be enabled.
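To make the two-level idea concrete, here is a minimal sketch of a keyword-based profanity filter. The word lists, level names and function are invented for illustration – they are not Facebook's actual lists or API, which are not public.

```python
# Illustrative two-level blocklist filter. The terms and level names
# ("medium" / "strong") are hypothetical, not Facebook's real lists.

MEDIUM_BLOCKLIST = {"spamword"}                     # hypothetical mild terms
STRONG_BLOCKLIST = MEDIUM_BLOCKLIST | {"badword"}   # "strong" adds more terms

def is_blocked(post_text: str, level: str = "strong") -> bool:
    """Return True if the post contains a blocked term at the given level."""
    blocklist = STRONG_BLOCKLIST if level == "strong" else MEDIUM_BLOCKLIST
    words = {w.strip(".,!?").lower() for w in post_text.split()}
    return bool(words & blocklist)
```

A real filter would also handle deliberate misspellings and embedded punctuation, which is why a platform-maintained list is preferable to a hand-rolled one.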
The main way to provide content moderation on YouTube is to create a channel for the brand. This allows the brand to have its own look and feel, to control what videos are posted and to control the comments that are made. For the last, comments can be pre-moderated – meaning they will not appear until approved – or reactively moderated, in which case they will appear by default. The reactive option should be treated with caution, as inappropriate comments will only be noticed if flagged.
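The difference between the two modes can be sketched as a simple comment queue – pre-moderated comments start hidden until approved, while reactively moderated ones start visible and are hidden only when flagged. The class and method names below are illustrative, not YouTube's API.

```python
# Sketch of pre-moderation vs reactive moderation (names are hypothetical).

class CommentQueue:
    def __init__(self, pre_moderated: bool):
        self.pre_moderated = pre_moderated
        self.comments = []  # each entry: {"text": str, "visible": bool}

    def submit(self, text: str) -> None:
        # Pre-moderated comments start hidden; reactive ones start visible.
        self.comments.append({"text": text, "visible": not self.pre_moderated})

    def approve(self, index: int) -> None:
        self.comments[index]["visible"] = True      # moderator approves

    def flag(self, index: int) -> None:
        self.comments[index]["visible"] = False     # viewer/moderator flags

    def visible_comments(self) -> list:
        return [c["text"] for c in self.comments if c["visible"]]
```

The trade-off in the article falls straight out of this model: pre-moderation costs moderator time on every comment, while reactive moderation shows everything until someone notices and flags it.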
The final major site to consider is Twitter. Due to its ease of use and limited-length format, Twitter can be where news spreads fastest, and Twitter's own features can produce unwanted results. For example, projecting a Twitter feed live at an event is now a common occurrence, allowing attendees to send their thoughts to everyone present, providing a talking point and entertainment. However, it is all too easy for this facility to become memorable for all the wrong reasons – especially if a large display board is used, or if the hashtag was released before the event and the campaign has been hijacked.
It has become apparent that at least a basic level of content moderation is essential for these feeds – un-moderated feeds are open to abuse, profanity, ‘off-message’ postings or worse. Because Twitter has an ‘open’ programming interface, developers have provided add-ons, utilities and tools in abundance. There are several free tools now available for moderating Twitter feeds, as well as paid-for tools with wider capability.
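The core of such a tool can be sketched in a few lines: screen each incoming post against a blocklist before it reaches the big screen. Tweets are represented here as plain strings and the blocked terms are invented for the example – a real tool would pull posts from Twitter's API, which is not modelled here.

```python
# Hedged sketch of moderating a live event feed before display.
# The blocklist terms are hypothetical examples.

OFF_MESSAGE_TERMS = {"scandal", "boycott"}

def moderate_feed(tweets, blocklist=OFF_MESSAGE_TERMS):
    """Yield only tweets that contain none of the blocked terms."""
    for tweet in tweets:
        words = {w.strip("#.,!?").lower() for w in tweet.split()}
        if not (words & blocklist):
            yield tweet
```

In practice this keyword screen would be combined with human pre-approval for anything projected at an event, since hijacked hashtags are rarely polite enough to use predictable words.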