Four short links: 12 August 2019

Four short links
  1. First Person Adventure via Mario Maker (Vice) — the remarkable “3D Maze House (P59-698-55G)” by creator ねぎちん somehow manages to credibly re-create the experience of playing a first-person (!!) adventure game like Wizardry, something Nintendo clearly never intended.
  2. Measurable Counterfactual Local Explanations for Any Classifier — generates w-counterfactual explanations that state minimum changes necessary to flip a prediction’s classification [and …] builds local regression models, using the w-counterfactuals to measure and improve the fidelity of its regressions. Making AI “explain itself” is useful and hard; this seems like an interesting step forward.
  3. Student Evaluation of Teaching Ratings and Student Learning are Not Related (Science Direct) — Students do not learn more from professors with higher student evaluation of teaching (SET) ratings. […] New meta-analyses of multisection studies show that SET ratings are unrelated to student learning. (via Sciblogs)
  4. Apparent Gender-Based Discrimination in the Display of STEM Career Ads — women are a more expensive demographic for advertisers to reach, so cost-optimizing bidding algorithms show STEM career ads to fewer women, and men end up seeing more of them. (via Ethan Mollick)

Blockchain solutions in enterprise

Blockchain is a solution for business networks. It makes sense to deploy a blockchain-based solution only where there is a network of collaborating participants who are issuing transactions around a set of common assets. In this article, we’ll identify the crucial first steps for spotting scenarios that suit a blockchain-based solution, and for beginning to transform your business model.

Our first indicator that blockchain is the right solution is the presence of a business network with multiple participants. Our second is that those participants require a shared view of assets and their associated transactions.

We then use the following four key blockchain features to further define the benefits of a blockchain-based solution:

Consensus

The process of agreeing on new transactions and distributing them to participants in the network.

Provenance

A complete history of all transactions related to the assets recorded on the blockchain.

Immutability

Once a transaction has been stored on the blockchain, it cannot be edited, deleted, or have transactions inserted before it.

Finality

Once a transaction is committed to the blockchain, it is considered “final” and can no longer be “rolled back” or undone.
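
To make provenance, immutability, and finality more concrete, here is a minimal, illustrative sketch of a hash-chained, append-only ledger in Python. It is not tied to any particular blockchain platform (and it ignores consensus entirely, since that requires multiple peers); it simply shows why history recorded this way cannot be silently edited.

```python
import hashlib
import json
import time


def block_hash(block):
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


class Ledger:
    """A toy append-only ledger: each block commits to its predecessor."""

    def __init__(self):
        genesis = {"index": 0, "timestamp": 0, "tx": None, "prev_hash": "0" * 64}
        self.chain = [genesis]

    def append(self, tx):
        """Finality (toy version): once appended, a transaction's position is fixed."""
        prev = self.chain[-1]
        block = {
            "index": prev["index"] + 1,
            "timestamp": time.time(),
            "tx": tx,
            "prev_hash": block_hash(prev),
        }
        self.chain.append(block)
        return block

    def verify(self):
        """Immutability check: any edit to an earlier block breaks the chain."""
        return all(
            self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

    def provenance(self, asset_id):
        """Provenance: the full history of transactions for one asset."""
        return [b["tx"] for b in self.chain
                if b["tx"] and b["tx"].get("asset") == asset_id]


ledger = Ledger()
ledger.append({"asset": "pallet-42", "action": "shipped", "by": "SupplierA"})
ledger.append({"asset": "pallet-42", "action": "received", "by": "RetailerB"})
print(ledger.verify())                    # True
ledger.chain[1]["tx"]["action"] = "lost"  # tamper with history...
print(ledger.verify())                    # False: tampering is detectable
```

In a real network, consensus determines which proposed block is appended, and every participant holds a copy of the chain and runs the same verification.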

Several other blockchain features underpin these four key benefits and are worth keeping in mind as you review any potential scenario:

Identity

All participants in a permissioned blockchain network have an identity in the form of a digital certificate—the same technology that underpins the security and trust when we use a web browser to access our online bank.

Security

Every transaction in the permissioned network is cryptographically signed, which provides authenticity of which participant sent it, nonrepudiation (meaning they can’t deny sending it), and integrity (meaning it hasn’t been changed since it was sent).

Contracts

Smart contracts hold the business logic for transactions and are executed across the network by the participants endorsing a transaction.
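
The identity and security points lend themselves to a small sketch as well. The snippet below assumes the `cryptography` Python package and uses a bare Ed25519 key pair as a stand-in for a participant’s digital certificate; it shows how signing a transaction provides authenticity, nonrepudiation, and integrity.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Identity: in a permissioned network each participant holds a certificate;
# here a bare Ed25519 key pair stands in for it.
supplier_key = Ed25519PrivateKey.generate()
supplier_pub = supplier_key.public_key()

# The transaction the supplier wants to submit to the network.
tx = {"asset": "pallet-42", "action": "shipped", "by": "SupplierA"}
payload = json.dumps(tx, sort_keys=True).encode("utf-8")

# Security: the transaction is cryptographically signed...
signature = supplier_key.sign(payload)

# ...so any other participant can check authenticity and integrity.
try:
    supplier_pub.verify(signature, payload)
    print("signature valid: sender authenticated, payload unmodified")
except InvalidSignature:
    print("signature invalid")

# Integrity: changing even one field invalidates the signature.
tampered = json.dumps({**tx, "action": "lost"}, sort_keys=True).encode("utf-8")
try:
    supplier_pub.verify(signature, tampered)
except InvalidSignature:
    print("tampered payload rejected")
```

In a permissioned network, a certificate authority binds each public key to a known organization, which is what turns “this key signed it” into “this participant signed it.”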

These benefits help engender trust between the participants in business networks, and we can use them as a litmus test when checking whether blockchain is a good technology fit. Note that a scenario need not require every benefit just listed, but the more that are required, the stronger the case for using blockchain.

We should always be wary of treating blockchain as a panacea. There are many situations where blockchain wouldn’t be a good fit. For example:

  • Blockchain is not suitable if there’s only a single participant in the business network.
  • Although we talk about transactions and world state databases in blockchain, it shouldn’t be thought of as a replacement for traditional databases or transaction servers.
  • Blockchain by design is a distributed peer-to-peer network, and is heavily based on cryptography. With this comes a number of nonfunctional requirement considerations. For example, performance and latency won’t match a traditional database or transaction server, but scalability, redundancy, and high availability are built in.

Assets, participants, and transactions

When thinking about a potential blockchain solution and the benefits it brings to the network of participants, it is useful to view it in relation to the following concepts:

  • Assets
  • Participants
  • Transactions

We have already introduced some examples of these. They are core concepts in a blockchain network, and each benefits from the four key trust features introduced in the previous section.

Assets

An asset, whether purely digital or backed by a physical object, represents something that is recorded on the blockchain. The asset may be shared across the whole network or kept private, depending on the requirements. A smart contract defines the asset.

Participants

Participants occupy different levels in a blockchain network. There are those participants who run parts of the network and endorse transactions. Other members may consume services of the network but may rely on and trust other participants to run the network and endorse transactions. Then there are the end users who are interacting with the blockchain network through a user interface. The end user may not even be aware that a blockchain underpins the system.

Transactions

The transactions are coded inside the smart contracts alongside the assets to which the transactions belong. Think of the transactions as the interaction points between the assets and the participants; a participant can create, delete, and update a given asset, assuming they are authorized to do so. It is these transactions that are stored immutably on the blockchain, which also provides the provenance of any changes to the asset over time.
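
Pulling the three concepts together, here is a minimal, platform-agnostic sketch of how a smart contract might define an asset and the transactions that act on it, including a simple authorization check on the submitting participant. It reuses the toy `Ledger` from the earlier sketch and is illustrative only; real chaincode APIs differ by platform.

```python
class AssetContract:
    """Toy smart contract: defines an asset type and the transactions on it."""

    def __init__(self, ledger):
        self.ledger = ledger    # the shared, append-only ledger
        self.world_state = {}   # current value of each asset

    def create_asset(self, participant, asset_id, owner):
        if asset_id in self.world_state:
            raise ValueError("asset already exists")
        asset = {"id": asset_id, "owner": owner, "status": "created"}
        self.world_state[asset_id] = asset
        self.ledger.append({"asset": asset_id, "action": "create", "by": participant})
        return asset

    def transfer_asset(self, participant, asset_id, new_owner):
        asset = self.world_state[asset_id]
        # Authorization: only the current owner may transfer the asset.
        if participant != asset["owner"]:
            raise PermissionError(f"{participant} is not the owner of {asset_id}")
        asset["owner"] = new_owner
        self.ledger.append({"asset": asset_id, "action": "transfer",
                            "by": participant, "to": new_owner})
        return asset


contract = AssetContract(Ledger())
contract.create_asset("SupplierA", "pallet-42", owner="SupplierA")
contract.transfer_asset("SupplierA", "pallet-42", new_owner="RetailerB")
print(contract.ledger.provenance("pallet-42"))  # full history for the asset
```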

The blockchain fit

First and foremost, check that there is a business network in place. Identify how many suppliers and partners are involved in both the internal and external network. If there is a good business network in place, consider the rest of the blockchain features.

Some disputes relate to differences between what was ordered and what was subsequently received; these are often the result of different participants in a business network (partners, suppliers, and delivery companies) tracking goods in separate, siloed systems.

Therefore, a shared ledger with consensus and finality provided by blockchain across the business network will help to reduce the overall number of disputes as it will give all participants the same information on the assets being tracked.

Furthermore, if changes to the data being tracked either intentionally or unintentionally are part of the root cause of these disputes, then the provenance and immutability features of blockchain could also help.

Last, consider the amount of time taken to resolve these issues. If there are multiple systems (including third-party systems) that someone needs to check in order to resolve any transactions in dispute, having a single shared ledger that is maintained through consensus will help reduce the time taken to resolve them.

Some further observations about how a blockchain-based solution can benefit this business network:

  • Each participant in the business network has an identity and is permissioned in the network. This could help with your processes related to know your customer (KYC) and anti-money laundering (AML).
  • Smart contracts could be designed to resolve some of the disputes automatically by maintaining consistency across the business network and therefore further reducing the number of disputes.

Choosing a first scenario

You may be considering multiple scenarios where blockchain provides a good solution fit. In this case, you will need to compare each to determine which is the best scenario to work on first.

We recommend a simple approach for comparing each scenario using a quadrant chart, where each is placed on the chart based on its relative benefit and simplicity.

In Figure 1, the x-axis is the simplicity of the scenario (simpler to the right) and the y-axis represents the benefit (more beneficial toward the top). Place each scenario on the quadrant chart according to its expected benefit and simplicity as a blockchain solution. This is best done as a group exercise with the appropriate stakeholders, who can provide the necessary insight into where each scenario falls.

Once all scenarios have been plotted on the chart, it becomes obvious which are the first scenarios to concentrate on—those that will provide the most benefits and are the simplest.

Figure 1. Comparing scenarios based on their benefit and simplicity
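
If it helps to run the exercise quantitatively, the following hedged sketch (with made-up scenario names and scores) plots stakeholder ratings on a benefit-versus-simplicity quadrant chart using matplotlib.

```python
import matplotlib.pyplot as plt

# Hypothetical scenarios scored in a stakeholder workshop;
# simplicity and benefit are both on a 1-10 scale.
scenarios = {
    "Dispute resolution": {"simplicity": 8, "benefit": 9},
    "Trade finance":      {"simplicity": 4, "benefit": 8},
    "Loyalty points":     {"simplicity": 7, "benefit": 4},
    "Full supply chain":  {"simplicity": 2, "benefit": 9},
}

fig, ax = plt.subplots()
for name, s in scenarios.items():
    ax.scatter(s["simplicity"], s["benefit"])
    ax.annotate(name, (s["simplicity"], s["benefit"]),
                xytext=(5, 5), textcoords="offset points")

# Quadrant lines at the midpoint of each scale.
ax.axvline(5, linestyle="--", color="grey")
ax.axhline(5, linestyle="--", color="grey")
ax.set_xlabel("Simplicity (simpler to the right)")
ax.set_ylabel("Benefit (more beneficial toward the top)")
ax.set_title("Candidate blockchain scenarios")
plt.show()
```

The scenarios landing in the top-right quadrant are the candidates to tackle first.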

Transforming the business network

Once your first blockchain scenario has been identified, you will want to move to the next phase: building the minimum viable product (MVP). An MVP represents the smallest product that can be built to accomplish a goal of the blockchain scenario. Starting an MVP with blockchain shouldn’t differ much from starting one with any other technology, and good software engineering practices, such as using Agile principles, will always be applicable. Following are some observations that will help as you start to transform your business with a new blockchain-based solution:

  • Blockchain is a team sport. There will be multiple stakeholders from different organizations in the business network. Some of these organizations may not have traditionally worked directly with one another. Therefore, a clear understanding of the requirements and issues across all participants, and clear lines of communication and agreement, are critical to the success of the project.
  • Use design thinking techniques that focus on the goals for the user to agree on the scope of the MVP.
  • Use agile software engineering best practices, such as continuous integration and stakeholder feedback, to iterate throughout the development of the MVP. Keep stakeholders informed and act on feedback.
  • Start with a small network and grow. There will be some challenges ahead, as this may be a paradigm shift for the business network.
  • If replacing an existing system, consider running the blockchain-based solution as a shadow chain to mitigate risk. By this we mean that, during the pilot phase, you run the new platform alongside the legacy system. Ideally, you would pass real production data to the new blockchain-based system to test and validate it, while continuing to rely on the legacy system for this phase of the project. Only after thorough testing has been completed and the new system has been proven should you switch from the legacy system to the new one (a small harness sketch follows this list).
  • Although blockchain is likely to be a core foundational part of the solution, it probably won’t make up the majority of it. The blockchain network will still integrate with other external systems that provide additional functions such as off-chain data storage, identity and access management, Application Programming Interface (API) management, presentation layers, and so on.
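
As a sketch of the shadow-chain idea above: during the pilot, every production transaction still goes to the legacy system, which remains the source of truth, while the same data is mirrored to the blockchain pilot and any divergence is logged for investigation. The `record_order` interfaces here are hypothetical placeholders for your own systems.

```python
import logging

logger = logging.getLogger("shadow_chain")


def process_order(order, legacy_system, shadow_chain):
    """Pilot-phase harness: the legacy system stays authoritative while the
    same production data is mirrored to the blockchain-based pilot."""
    legacy_result = legacy_system.record_order(order)      # source of truth

    try:
        shadow_result = shadow_chain.record_order(order)   # candidate system
        if shadow_result != legacy_result:
            logger.warning("divergence on order %s: legacy=%r shadow=%r",
                           order["id"], legacy_result, shadow_result)
    except Exception:
        # Failures in the shadow system must never affect the live business flow.
        logger.exception("shadow chain failed on order %s", order["id"])

    return legacy_result
```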

Read the full free ebook here.

This post is a collaboration between O’Reilly and IBM. See our statement of editorial independence.

Four short links: 9 August 2019

Four short links
  1. Facebook Patents Shadow Banning — which has a long history elsewhere.
  2. Living Off The Land in Linux — legitimate functions of Unix binaries that can be abused to break out restricted shells, escalate or maintain elevated privileges, transfer files, spawn bind and reverse shells, and facilitate the other post-exploitation tasks. Interesting to see the surprising functionality built into some utilities.
  3. Neural Blind Deconvolution Using Deep Priors — deblurring photos with neural nets. Very cool, and they’ve posted code. (via @roadrunning01)
  4. Warshipping (TechCrunch) — I mail you a package that contains a Wi-Fi sniffer with cellular connection back to me. It ships me your Wi-Fi handshake, I crack it, ship it back, now it joins your network and the game is afoot. (via BoingBoing)

Got speech? These guidelines will help you get started building voice applications

As companies begin to explore AI technologies, three areas in particular are garnering a lot of attention: computer vision, natural language applications, and speech technologies. A recent report from the World Intellectual Property Organization (WIPO) found that together these three areas accounted for a majority of AI-related patents: computer vision (49% of all patents), natural language processing (NLP) (14%), and speech (13%).

Figure 1. A 2019 WIPO Study shows patent publications in a few key areas. Image by Ben Lorica.

Companies are awash with unstructured and semi-structured text, and many organizations already have some experience with NLP and text analytics. While fewer companies have infrastructure for collecting and storing images or video, computer vision is an area that many companies are beginning to explore. The rise of deep learning and other techniques has led to startups commercializing computer vision applications in security and compliance, media and advertising, and content creation.

Companies are also exploring speech and voice applications. Recent progress in natural language and speech models has increased accuracy and opened up new applications. Contact centers, sales and customer support, and personal assistants lead the way among enterprise speech applications. Voice search, smart speakers, and digital assistants are increasingly prevalent on the consumer side. While far from perfect, the current generation of speech and voice applications works well enough to drive an explosion in voice applications. An early sign of the potential of speech technologies is the growth of voice-driven searches: Comscore estimates that by 2020 about half of all online searches will use voice; Gartner recommends that companies redesign their websites to support both visual and voice search. Additionally, smart speakers are projected to grow by more than 82% from 2018 to 2019, and by the end of the year, the installed base for such devices will exceed 200 million.

Figure 2. Types of voice interactions. Image source: Yishay Carmiel and Ben Lorica.

Audio content is also exploding, and this new content will need to be searched, mined, and unlocked using speech technologies. For example, according to a recent New York Times article, in the US, “nearly one out of three people listen to at least one podcast every month.” The growth in podcasts isn’t limited to the US: podcasts are growing in other parts of the world, including China.

Voice and conversational applications can be challenging

Unlike text and NLP, or computer vision, where one can pull together simple applications, voice applications that venture beyond simple voice commands remain challenging for many organizations. Spoken language tends to be “noisier” than written text. For example, having read many podcast transcripts, we can attest that transcripts from spoken conversations still require a lot of editing. Even if you have access to the best transcription (speech-to-text) technology available, you often end up with a document full of pauses, fillers, restarts, interjections (in the case of conversations), and ungrammatical constructs. The transcript may also contain passages that need to be refined because the speaker was “thinking out loud” or had trouble articulating or formulating specific points. Nor will the resulting transcript necessarily be properly punctuated or capitalized in the right places. Thus, in many applications, post-processing of transcripts will require human editors.
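
As a small, hypothetical illustration of that post-processing step, the sketch below strips fillers and stuttered repeats from a raw transcript and applies crude sentence casing. Real pipelines do far more (punctuation restoration, capitalization, disfluency detection trained on labeled data), and human editors would still review the result.

```python
import re

# Raw ASR output: no punctuation, fillers, and a stuttered restart.
raw = "um so i think the the quarterly numbers uh look better than than expected you know"

FILLERS = r"\b(um|uh|er|ah|you know|i mean|like)\b"


def clean_transcript(text):
    text = re.sub(FILLERS, " ", text, flags=re.IGNORECASE)  # drop filler words
    text = re.sub(r"\b(\w+)( \1\b)+", r"\1", text)          # collapse repeats ("the the")
    text = re.sub(r"\s+", " ", text).strip()                 # normalize whitespace
    return text[:1].upper() + text[1:] + "."                 # crude sentence casing


print(clean_transcript(raw))
# -> "So i think the quarterly numbers look better than expected."
```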

In computer vision (and now in NLP), we are at a stage where data has become at least as important as algorithms. Specifically, pre-trained models have achieved state-of-the-art results in several tasks in computer vision and NLP. What about speech? There are a few reasons why a “one size fits all” speech model hasn’t materialized:

  • There are a variety of acoustic environments and background noises: indoor or outdoor, in a car, in a warehouse, or in a home, etc.
  • Multiple languages (English, Spanish, Mandarin, etc.) may need to be supported, particularly in situations where speakers use (or mix and match) several languages in the course of conversations.
  • The type of application (search, personal assistant, etc.) impacts dialog flow and vocabulary.
  • Depending on the level of sophistication of an application, language models and vocabulary will need to be tuned for specific domains and topics. This is also true for text and natural language applications.

Building voice applications

Challenges notwithstanding, as we noted, there is already considerable excitement surrounding speech technologies and voice applications. We haven’t reached the stage where a general-purpose solution can be used to power a wide variety of voice applications, nor do we have voice-enabled intelligent assistants that can handle multiple domains.

There are, however, good building blocks from which one can assemble interesting voice applications. To assist companies that are exploring speech technologies, we assembled the following guidelines:

  • Narrow your focus. As we noted, “one size fits all” is not possible with the current generation of speech technologies, so it is best to focus on specific tasks, languages, and domains.
  • Understand the goal of the application, then backtrack to the types of techniques that will be needed. If you know the KPIs for your application, this will let you target the language models needed to achieve those metrics for the specific application domain.
  • Experiment with “real data and real scenarios.” If you plan to get started by using off-the-shelf models and services, it is still important to test them against real data and real scenarios; in many cases, your initial test data will not be representative of how users will interact with the system you hope to deploy (a minimal sketch follows this list).
  • Acquire labeled examples for each specific task. For example, recognizing the word “cat” in English and the word for “cat” in Mandarin will require different models and different labeled data.
  • Develop a data-acquisition strategy to gather appropriate data. Make sure you build a system that can learn as it gathers more data, and an iterative process that fosters ongoing improvement.
  • Users of speech applications are concerned about outcomes. Speech models are only as interesting as the insights that can be derived and the actions that are taken using those insights. For example, if a user asks a smart speaker to play a specific song, the only thing that matters to this user is that it plays that exact song.
Figure 3. Models should be used to derive insights. Image source: Yishay Carmiel and Ben Lorica.
  • Automate workflows. Ideally, the needed lexicon and speech models can be updated without much intervention (from machine learning or speech technology experts).
  • Voice applications are complex end-to-end systems, so optimize when possible. Speech recognition systems alone are composed of several building blocks, which we described in a previous post. Training and retraining models can be expensive. Depending on the application and setup, latency and continuous connectivity can be important considerations.
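
Here is the minimal sketch referenced in the third guideline: running a real recording through an off-the-shelf recognizer via the open source SpeechRecognition package. The file name is a placeholder; a genuine experiment would use representative audio from your own domain and compare transcripts (and error rates) across the services you are evaluating.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

# Use actual recordings from your domain ("real data and real scenarios"),
# not clean read-aloud test clips. The file name is just a placeholder.
with sr.AudioFile("support_call_sample.wav") as source:
    audio = recognizer.record(source)

try:
    # One off-the-shelf hosted recognizer; swap in whichever service you are evaluating.
    transcript = recognizer.recognize_google(audio)
except sr.UnknownValueError:
    transcript = ""  # the service could not make sense of the audio
except sr.RequestError as err:
    raise RuntimeError(f"speech service unavailable: {err}")

print(transcript)
# Compare against what was actually said to estimate accuracy on your own data.
```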

From NLU to SLU

We are still in the early stages for voice applications in the enterprise. The past 12 months have seen rapid progress in pre-trained natural language models that set records across multiple NLP benchmarks. Developers are beginning to take these language models and tune them for specific domains and applications.

Speech adds another level of complexity—beyond natural language understanding (NLU)—to AI applications. Spoken language understanding (SLU) requires the ability to extract meaning from speech utterances. While full SLU is not yet available for voice or speech applications, the good news is that one can already build simple, narrowly focused voice applications using existing models. To find the right use cases, companies will need to understand the limitations of current technologies and algorithms.

In the meantime, we’ll proceed in stages. As Alan Nichol noted in a post focused on text-based applications, “Chatbots are just the first step in the journey to achieve true AI assistants and autonomous organizations.” In the same way, today’s voice applications provide a very early glimpse of what is to come.


Four short links: 8 August 2019

Four short links
  1. From The Depths Of Counterfeit Smartphones — a security look at counterfeit phones. Spoiler: they’re nasty; stay away. Both the Galaxy S10 and iPhone 6 counterfeits we assessed contained malware and rootkits. And that’s the most straightforward nastiness: even if you removed the rootkit they’d still be shocking. In the case of the “iPhone,” further digging revealed that it runs a far older version of Android: KitKat 4.4.0. KitKat’s last update came in 2014.
  2. Linking Art through Human Poses — arXiv paper that finds artwork with matching poses using OpenPose. (via MIT TR)
  3. A Framework for Content Moderation (Ben Thompson) — pretty good post, tackling why and where the different levels of moderation make sense.
  4. Fully Remote Attack Surface of the iPhone (Google Project Zero) — very interesting read, showing the detail and dead ends of a security tester. The method […] processes incoming MIME messages, and sends them to specific decoders based on the MIME type. Unfortunately, the implementation did this by appending the MIME type string from an incoming message to the string “decode” and calling the resulting method. This meant that an unintended selector could be called, leading to memory corruption.

Four short links: 7 August 2019

Four short links
  1. Why Checklists Fail (Nature) — After the NHS mandated the WHO checklist, researchers at Imperial College London launched a project to monitor the tool’s use and found that staff were often not using it as they should. In a review of nearly 7,000 surgical procedures performed at five NHS hospitals, they found that the checklist was used in 97% of cases, but was completed only 62% of the time. When the researchers watched a smaller number of procedures in person, they found that practitioners often failed to give the checks their full attention, and read only two-thirds of the items out loud. In slightly more than 40% of cases, at least one team member was absent during the checks; 10% of the time, the lead surgeon was missing. If you give a checklist that ensures X to workers who don’t value X, you get workers who half-arse their way through a checklist. And, in this case, unnecessarily hurt and/or killed patients.
  2. Rowboats and Magic Feathers: Reflections on 13 Years of Museum 2.0 (Nina Simon) — popular social media productions twist the creators’ perceptions and become burdens. I kept to a rigorous schedule and never took a week off. Even weeks when I was giving birth, on vacation, or exhausted from challenges at work, I blogged. My attitude was, “readers don’t care what’s going on with me. They want the content.” This blog became like Dumbo’s feather. I loved it, but I also let it overpower my sense of self. As long as I was holding it—as long as I was pumping out content—I could soar. But I was terrified to let it drop. Without the blog, I presumed I could not fly. Compare Overly-Attached Girlfriend’s video on leaving YouTube. It’s hard stuff.
  3. De-Risking Custom Technology Projects (18F) — sweet advice.
  4. Distinguishing States of Conscious Arousal Using Statistical Complexity — how can you tell whether someone is awake or sedated, just from their brain activity? By analyzing signals from individual electrodes and disregarding spatial correlations, we find that statistical complexity distinguishes between the two states of conscious arousal through temporal correlations alone. In particular, as the degree of temporal correlations increases, the difference in complexity between the wakeful and anaesthetized states becomes larger. Uses an “epsilon machine,” which I’d not heard of before but which is a “minimal, unifilar presentation of a stationary stochastic process” (particular type of hidden Markov model). The entropy of the epsilon machine’s states yields a measure of statistical complexity, which this paper shows maps to sedated/wake states.

Four short links: 6 August 2019

Four short links
  1. The Path to Traced Movies (Pixar) — Until recently, brute-force path tracing techniques were simply too noisy and slow to be practical for movie production rendering.[…] In this survey, we provide an overview of path tracing and highlight important milestones in its development that have led to it becoming the preferred movie rendering technique today.
  2. Free to Play? Hate, Harassment, and Positive Social Experiences in Online Games (ADL) — The survey found that 88% of adults who play online multiplayer games in the US reported positive social experiences while playing games online. The most common experiences were making friends (51%) and helping other players (50%). […] Seventy-four percent of adults who play online multiplayer games in the US experience some form of harassment while playing games online. Sixty-five percent of players experience some form of severe harassment, including physical threats, stalking, and sustained harassment. Alarmingly, nearly a third of online multiplayer gamers (29%) have been doxed.
  3. Cinematic Scientific Visualization: The Art of Communicating Science — slides and words from SIGGRAPH talk on advanced film-style techniques for telling science stories.
  4. Core Cybersecurity Feature Baseline for Securable IoT Devices: A Starting Point for IoT Device Manufacturers (NIST) — draft of some excellent guidelines to device manufacturers. Device identifiers, firmware updates and resets, data protection, disabling and restricting access to local and network interfaces, event logging, etc. Doesn’t specify how to do these things, just that manufacturers should do them. Important so we don’t build more future botfarms.

Four short links: 5 August 2019

Four short links
  1. Toolkit of Policies to Promote Innovation (Journal of Economic Perspectives) — We discuss a number of the main innovation policy levers and describe the available evidence on their effectiveness: tax policies to favor research and development, government research grants, policies aimed at increasing the supply of human capital focused on innovation, intellectual property policies, and pro-competitive policies. In the conclusion, we synthesize this evidence into a single-page “toolkit,” in which we rank policies in terms of the quality and implications of the available evidence and the policies’ overall impact from a social cost-benefit perspective. We also score policies in terms of their speed and likely distributional effects. (via Marginal Revolution)
  2. A Brief Tour of Differential Privacy — lecture slides from a CMU course. Content warning: Comic Sans.
  3. Ethically Aligned Design, First Edition — read online. The most comprehensive, crowd-sourced global treatise regarding the ethics of autonomous and intelligent systems available today.
  4. N-Shot Learning — brief overview of machine learning from zero, one, or a handful of examples.

Four short links: 2 August 2019

Four Short Links
  1. The Evolutionary Roots of Human Decision Making (NCBI) — paper showing that we share cognitive biases with other primates. In one study, monkeys had a choice between one experimenter (the gains experimenter) who started by showing the monkey one piece of apple and sometimes added an extra piece of apple, and a second experimenter (the losses experimenter) who started by showing the monkey two pieces of apple and sometimes removed one. Monkeys showed an overwhelming preference for the gains experimenter over the losses experimenter—even though they received the same payoff from both. In this way, capuchins appear to avoid options that are framed as a loss, just as humans do.
  2. 6 Must Reads for Cutting Through Conflict and Tough Conversations (First Round Capital) — a summary of good (?) advice from books. Some I agree with, but others … having worked for narcissists and bean counters, find a new job. Don’t stay any longer than you have to with those jerks.
  3. ERNIE — Baidu’s open source continual pre-training framework for language understanding. Baidu says: Integrating both phrase information and named entity information enables the model to obtain better language representation compared to BERT. ERNIE is trained on multi-source data and knowledge collected from encyclopedia articles, news, and forum dialogues, which improves its performance in context-based knowledge reasoning. See also the ERNIE paper.
  4. First Programmable Memristor Computer (IEEE) — The new chip combines an array of 5,832 memristors with an OpenRISC processor. 486 specially-designed digital-to-analog converters, 162 analog-to-digital converters, and two mixed-signal interfaces act as translators between the memristors’ analog computations and the main processor.

Make data science more useful

In this episode of the Data Show, I speak with Cassie Kozyrkov, technical director and chief decision scientist at Google Cloud. She describes “decision intelligence” as an interdisciplinary field concerned with all aspects of decision-making, one that combines data science with the behavioral sciences. Most recently, she has been focused on developing best practices that can help practitioners make safe, effective use of AI and data. Kozyrkov uses her platform to help data scientists develop skills that will enable them to connect data and AI with their organizations’ core businesses.

We had a great conversation spanning many topics, including:

  • How data science can be more useful

  • The importance of the human side of data

  • The leadership talent shortage in data science

  • Is data science a bubble?
