
5 tactics of an effective agritech product manager

Why are relatively few agritech products achieving adoption at scale when billions of dollars are being invested internationally every year? New start-ups appear almost weekly. And established companies are shifting from small innovations around the edges to major projects that sit at the heart of business plans.

Yet for all this activity, few product ideas seemingly “make it”.

We work with many agricultural companies – start-ups and established organisations – to help them develop smart digital products and services. In our experience, there’s a strong correlation between a business’s approach to product management and its success in developing meaningful products or services that get used. The choice of product manager, the scope and objectives of their role, and their level of skill and authority drive the success (or otherwise) of the product or service development.

The CEO or a project sponsor may define the overall business outcomes and vision, and project managers may be concerned with product budget and timeline, but the product manager is at once the “voice of the product” to the business and the translator of the “voice of the customer” to the development team (this part of the role is also called product owner). They decide the detailed problems the team will try to solve, the relative priority of those problems, and when the solution is complete enough to be put into the hands of customers.

Here are five tactics an agritech product manager can use to be more effective in their role:

1.      Allocate your attention

A product manager juggles many tasks. They must understand scope, be able to prioritise effectively, understand how the team is delivering and what is planned. It’s incredibly hard to do this if the product only gets a small time-slice of your attention.

You won’t be able to effectively manage your product by turning up for a fortnightly sprint planning session. Product managers need to be able to spend time with both stakeholders and the development team. They participate in customer interviews, review the product in showcases put on by the development team, and are deeply involved in product planning workshops.

2.      Build stakeholder relationships

New products and services are built at the intersection of customer or user desirability, business viability, and technical feasibility:

  • Does this product solve a real problem for its users, and can they readily get the benefits?
  • Are customers willing to pay for a solution, and is this solution sufficiently “better” that they will switch?
  • Does this product or service meet the objectives and fit the strategic direction of the company?
  • Is there a business model that makes sense for the company and which could be profitable?
  • Can it be built to operate as envisioned, at a cost the company can justify?

Effective product managers really understand the needs of their users and customers – their behaviours and the problems the product is trying to solve. They use observation and interviews to inform their opinions and seek data from the existing tools or products that customers use.

Product managers must also build trust with the business, effectively communicating how the direction and priorities chosen for the product meet the objectives of the business.

3.      Discover, don’t assume

It’s very tempting to build technology products and services the way corporate computer systems were developed in the past: envisage a solution, document it as a set of requirements, and set the development team to work. When the developers and testers are done, roll out the solution (or pass it through to sales and marketing).

Effective product managers know that detailed customer needs are emergent, and so too must be the solution to those needs. They make use of product discovery activities: carrying out interviews, running experiments, and building prototypes. They know that testing an idea by building a prototype and validating that thinking with real users is not only often an order of magnitude quicker and less expensive than building software; it also avoids the huge waste of developing robust, performant, tested software that does not address the real problem.

Software and hardware will still need to be built, but continuously using discovery activities to understand and address customer problems reduces the risk of building a great solution to the wrong problems.

4.      Prioritise value

If customer and user problems and the solutions are emergent, how can you effectively manage the work of a development team (or teams)? How do you decide what gets released to customers, and when?

Effective product managers decide what discovery and development tasks are the highest priority to work on at any point in time. They pay great attention to the “product backlog” – the set of problems waiting to be worked on, ideas waiting to be tested, and validated ideas waiting to be turned into production software. They may visualise these using story maps, or as items on a Kanban board.

This is not “project management”, seeking to most efficiently have all the tasks completed on time and within budget. Rather, the product manager is making value-based decisions about which tasks or stories (feature sets) are the most valuable to do now, and which can be deferred (and might never be done if sufficient value can be delivered to customers and the business without them).

Product managers consider value on multiple scales:

  • Which stories deliver the most value to customers and end users?
  • Which stories help the company achieve its objectives (revenue, customer acquisition, or other outcomes)?
  • Which activities must be prioritised for the product development process to be successful (for instance, prioritising a discovery or validation activity that may change the overall shape of the product)?
  • Which essential dependencies must be built for more valuable stories to work?

5.      Release early and often

This may be an agile mantra, but it remains valid. It’s tempting to hold off putting your product into customers’ hands until it is “complete”. This is especially the case for established companies that worry about reputational risk.

Delaying until the product or service is largely “complete” misses the opportunity to learn how customers choose to use your product or service. They may pay no attention to that wonderful feature you slaved over and be thrilled by other functions. You may discover that the value you expected just isn’t there, and that you need to “pivot” to a different approach. Far better to do this early than wait until the entire budget is spent.

For organisations worried about reputational risk, limited pilots are a useful tool, whether with a subset of staff or a small group of customers. Early adopters may not fully represent your entire eventual market, but, carefully chosen, they can provide learning and become advocates to your broader market.

Learn More

We use a variety of tools and techniques to support Product Managers in their role, including discovery techniques and activities, dual-track agile (a team working on discovery and a team developing prioritised stories), and flexible scope contracts that focus on value delivery in a time frame rather than a fixed set of requirements. Talk to us to learn more.

If you’re a product manager in agriculture or agritech, consider attending the Ag Innovations Bootcamp – get inspiration and hands-on “how-to” experience for product development best practices.

 

Read about the Ag Innovations Bootcamp

Sounds like DEFRA’s been listening

Back in March I posted an article on LinkedIn arguing the case for future farm support to be channelled into technology solutions that can deliver productivity gains and better delivery of social and environmental goods.
 
Well, it seems the UK government is listening. Its publication of the Agriculture Bill last month, which will determine farming support for a post-Brexit UK (noting that there will be differences in devolved administrations), caught my eye on three counts:
 
  • The phasing out of direct support payments
  • The introduction of funding for farmer-led R&D and collaboration on productivity innovation
  • A new Environmental Land Management (ELM) scheme
 
Of course the devil is in the detail, but at first glance (and at odds with some farming leaders) I like the look of what’s being proposed. Here’s why:
 
First, phasing out of direct support finally puts an end to the subsidy crutch that for too long has made British farming unproductive. We lag hopelessly behind many of our major competitors on this metric and while transitioning to a brave new world won’t be easy, it is vital to give farming the boot up the backside to become more innovative by necessity.
 
The fact that there may no longer be a requirement to farm to receive progressively reduced payments over seven years is a good thing. It gives farmers wishing to exit a dignified means of doing so, and might even start to make land occupation (rents or purchase) a little more reflective of economic viability – a good thing for innovating farmers and new entrants alike.
 
Second – and the one which in many ways I am most excited about – is the directing of funds towards farmer-led R&D and innovation. This is potentially game changing and totally in tune with a more technology-driven future for the sector. 
 
Back in March I noted the announcement of the Innovate UK Transforming Food Production fund of £90m as being a welcome start, but really just a drop in the ocean. I really hope the government, through the Bill, is bold enough to put significant budget behind this farmer-led area and not just pay it lip service. There are some exciting initiatives we are involved in that fall squarely into what the government is driving at here. But this funding MUST encourage innovation that is focused on food production as well as other areas. As I wrote in the spring, more food is needed in the next 50 years than has been consumed in the entire history of humanity! It’s a big challenge that needs big thinking.
 
Third is the ELM scheme. For me there is also a huge technology role here. Delivery of public goods has to be measurable and we are now in the era of big (and small) data, machine learning and AI that could deliver real transformation in ways that can transparently demonstrate public value. The taxpayer should expect nothing less.
 
Moving away from direct support and into the territory of funding innovation and targeted activity is a sea change and something I believe to be a good thing. Ultimately, this approach is about the development of solutions which should, over time, stand on their own two feet. That’s what we are focused on and why so many of our clients come to us asking the question: “How will digital and data help us do the job better?” 
 
So, yes I understand why farm leaders are concerned. But this is not the time to cling onto the past. It is absolutely the time to tear up the rule book, imagine what the future should look like, and back truly innovative thinking and innovative farmers to get us there.  

Ways smarter tech can accelerate livestock genetic progress

I started writing about the case for smarter tech in cattle and sheep breeding programmes a while ago, and quickly realised there was more than would fit in a single blog post. In my previous post I described how technology can help us increase the pace of genetic progress, by:

  • Measuring what we can’t easily see, including a range of important traits such as efficiency, emissions and disease resistance; and
  • Reducing errors in recording and transcription.

There are two further areas where smart technology can help, and already is:

Cut collection costs

How much does it cost to collect a phenotype for an animal?

A phenotype is the observable characteristics of an individual that result from both its genotype and the environment. When you collect sufficient phenotypic information about many individuals, you can adjust for environmental effects and predict the genetic merit of those animals. Collecting phenotypes involves measuring details such as calving ease, weight gains, progeny survival and other characteristics of economic value to farmers and consumers.
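To make that idea concrete, here is a deliberately over-simplified sketch in Python. Real genetic evaluations use mixed models (BLUP) with pedigree or genomic relationships; the animal identifiers, weights, management groups and heritability in this example are invented.

```python
# Toy illustration only. Real genetic evaluations use mixed models (BLUP),
# pedigree or genomic relationships, and estimated heritabilities; the animal
# IDs, weights, group labels and heritability below are all invented.

from collections import defaultdict
from statistics import mean

# (animal_id, contemporary_group, weaning_weight_kg)
records = [
    ("lamb01", "mob_A", 31.0),
    ("lamb02", "mob_A", 28.5),
    ("lamb03", "mob_A", 33.5),
    ("lamb04", "mob_B", 26.0),
    ("lamb05", "mob_B", 29.0),
]

# 1. Estimate the shared environmental (management group) effect as the group mean.
group_weights = defaultdict(list)
for animal, group, weight in records:
    group_weights[group].append(weight)
group_mean = {group: mean(weights) for group, weights in group_weights.items()}

# 2. Express each phenotype as a deviation from its group mean, stripping out
#    (some of) the environment the animals shared.
deviation = {animal: weight - group_mean[group] for animal, group, weight in records}

# 3. Shrink the deviation by an assumed heritability to hint at genetic merit
#    (for a single own-performance record, EBV is roughly h2 x phenotypic deviation).
h2 = 0.3  # assumed heritability of weaning weight
merit = {animal: h2 * dev for animal, dev in deviation.items()}

for animal in sorted(merit, key=merit.get, reverse=True):
    print(f"{animal}: deviation {deviation[animal]:+.1f} kg, crude merit {merit[animal]:+.2f} kg")
```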

Current technology for DNA analysis tells us more about the genetic similarity and differences between animals (their genetic relatedness) than their outright performance. There are relatively few simple gene interactions you can measure with a DNA test.

What DNA testing helps with is the challenge of recording which animals are related to each other, especially as pedigree recording gets harder with larger sets of animals. For the DNA test to be useful, it must be able to be statistically connected to animals that have had phenotypes measured.

As Dr Mike Coffey of SRUC wrote in 2011, “In the age of the Genotype, Phenotype is king!”

Phenotyping will continue to be required. Livestock breeders will need a pool of accurate and comprehensive phenotype observations to support both new and existing breeding objectives, connected to the growing pool of genotyped animals.

Measuring animals is time-consuming and expensive: the tedious task of observing and measuring characteristics of animals, linking these to the correct animal records, and transcribing the observations into computer systems.

With the need for ongoing, and potentially more detailed phenotype recording, technology is our friend. From automated weighing and electronic identification to cameras, sensors, and deep learning, properly configured and managed technologies reduce the cost of data collection – especially for larger operations and at commercial scale.

We’ve helped several organisations organise and automate their phenotype recording, integrating electronic identification and a range of other technologies, and streaming that data back to central databases.

Better buying behaviour

What really drives buying decisions?

Ultimately livestock breeding is driven by demand.  Livestock breeders focus their selection efforts on traits that drive economic performance such as:

  • Fertility;
  • Birth weight and survival;
  • Feed conversion efficiency and growth rates or milk production;
  • Carcass characteristics or milk characteristics;
  • Docility, resistance or resilience to disease;

and more. In the livestock industry, this is presented as the predicted benefit to future generations (typically termed EPDs or EBVs depending on your species, breed, and country). The EBVs or EPDs are combined into economic indexes, where each is given a $ or £ weighting based on its impact in a typical farm system, and the results are usually delivered to buyers as tabular reports and graphs.
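As a purely illustrative example of how such an index is assembled, the sketch below multiplies each EBV by an economic weighting and sums the result; the trait names, EBVs and weightings are invented rather than drawn from any real breed index.

```python
# Illustrative only: the trait names, EBVs and $ weightings below are invented,
# not taken from any real breed index.

economic_weights = {        # $ of extra profit per unit of EBV
    "weaning_weight_kg": 1.50,
    "lambs_weaned": 12.00,
    "fleece_weight_kg": 4.00,
}

rams = {
    "ram_101": {"weaning_weight_kg": 2.1, "lambs_weaned": 0.05, "fleece_weight_kg": 0.3},
    "ram_102": {"weaning_weight_kg": 3.4, "lambs_weaned": -0.02, "fleece_weight_kg": 0.1},
}

def economic_index(ebvs):
    """Weighted sum: each trait's EBV multiplied by its economic weighting."""
    return sum(economic_weights[trait] * value for trait, value in ebvs.items())

for ram, ebvs in sorted(rams.items(), key=lambda item: economic_index(item[1]), reverse=True):
    print(f"{ram}: index ${economic_index(ebvs):+.2f}")
```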

And what do buyers select on?

  • In our experience, New Zealand dairy farmers make great use of the primary economic index – Breeding Worth or BW, combining that with decisions about breed and gestation length.
  • In the sheep industry, breed and breeder decisions are often the primary decider. Once a breeder is selected, the numbers are used to buy the best animals one can afford.
  • In the NZ and Australian beef industry, an expert tells me that for animals sold at auction, the closest purchase price correlation is with liveweight: in their opinion, animals are often valued on how big they are.

When sheep and beef farmers are presented with multiple EBVs and indexes, it can be overwhelming, and it is not surprising some farmers revert to assessing a ram visually, negating the value of genetics programmes.

I’m a great fan of Daniel Kahneman’s Thinking, Fast and Slow, where he points out the challenge we have with quantifying difficult or unknown measures.

“If a satisfactory answer to a hard question is not found quickly, System 1 will find a related question that is easier and will answer it. I call the operation of answering one question in place of another substitution.” – Daniel Kahneman, Thinking, Fast and Slow.

If understanding EBVs and indexes seems difficult, you’re most likely to substitute an easier question – how good the ram looks or how knowledgeable the breeder seems – without even realising you have done so.

How can technology help here? We can use technology to make ourselves smarter – or the questions simpler. For instance, there’s potential to create a data-driven system that analyses your current farm system and recommends which economic index or other measure you should use when choosing animals. Or perhaps a tool that, with a few simple selections, works out how much you should spend on rams or bulls, finds those at a sale that meet your needs, and returns a ranked short list that you can use in your final decisions.
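To make the shape of such a tool a little more tangible, here is a minimal sketch; the sale catalogue, price guides and thresholds are all invented.

```python
# Sketch of the shortlisting idea. Catalogue entries, prices and thresholds
# are invented; a real tool would draw on actual sale catalogues and EBVs.

sale_catalogue = [
    {"lot": 1, "price_guide": 1200, "index": 145, "growth_ebv": 2.9},
    {"lot": 2, "price_guide": 2500, "index": 168, "growth_ebv": 3.4},
    {"lot": 3, "price_guide": 900,  "index": 120, "growth_ebv": 1.8},
    {"lot": 4, "price_guide": 1800, "index": 161, "growth_ebv": 3.1},
]

def shortlist(catalogue, budget_per_ram, min_growth_ebv, top_n=3):
    """Keep lots within budget that meet the minimum growth EBV,
    then rank them by economic index."""
    candidates = [
        lot for lot in catalogue
        if lot["price_guide"] <= budget_per_ram and lot["growth_ebv"] >= min_growth_ebv
    ]
    return sorted(candidates, key=lambda lot: lot["index"], reverse=True)[:top_n]

for lot in shortlist(sale_catalogue, budget_per_ram=2000, min_growth_ebv=2.5):
    print(f"Lot {lot['lot']}: index {lot['index']}, guide price ${lot['price_guide']}")
```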

Smarter technology for livestock buyers is a key area where the industry can make real progress on the rate of genetic gain, and a strong understanding of how people make decisions will be critical to its success.

4 reasons why bespoke software development may be the right choice

Is bespoke development always the expensive option?

It is hard to think of a business today that does not rely on software to improve efficiency, ensure compliance or reach customers. As businesses grow, the opportunity to encapsulate functions and processes within an information system increases. Indeed, your software may define your business success.

One of the first questions our customers ask is ‘what off-the-shelf solutions are there?’. Often there are candidates, and the next question is ‘will this off-the-shelf solution meet my needs and provide the cheapest option?’

Assessing potential candidates is not easy – in reality, it’s only once you are using the software that you can evaluate whether it really fits your requirements. Making the wrong choice causes disruption, demoralises staff and makes you look bad. Assessing candidate software is not just ticking features off a checklist. It is about how the software will help grow your business – how it fits the way you do business, how usable it is, and how it will evolve to meet your changing needs.

There are times when it is also worth considering custom or “bespoke” software developed precisely for your needs, rather than what is available off the shelf.

Let’s look at four compelling reasons for bespoke software:

  1. The value of any software can be measured by uptake, either by your customers or internally. While customer uptake is easier to measure, low internal uptake may just look like staff being lazy or needing a lot of training – it may simply be that the software is hard to use. The greatest chance you have of achieving a solution that exactly meets your needs is by building it that way.

Workflow – Your software should reflect the efficient way you wish to manage your business. As Alan Cooper, the creator of Visual Basic, described, software should take away the pain. The worst case is an off-the-shelf solution that simply does not fit your workflow – and forces your business to change needlessly instead.

Functionality – Build only what you need and make the experience satisfying to users – great software should be a delight to use. Feature-laden off-the-shelf solutions add no value if those features are not used. In fact, they reduce value as users navigate a mess of irrelevant user interface. As a subscriber to an off-the-shelf solution you are paying for all the features, even if that cost is spread over many more subscribers.

  2. Software innovation has proven it can turn problems into unique solutions.

Software not only has the potential to increase the efficiency of your current business processes; it can analyse and solve challenges and connect people, so that problems become unique solutions for you and your customers. Indeed, businesses today have innovated with bespoke software to create solutions that are the reason for their phenomenal growth. They haven’t achieved this with the off-the-shelf software their moderately performing competitors are using.

  3. Your business needs and opportunities should drive your development roadmap.

In our software development business, we use agile processes and lean development to ensure you are buying the functionality that has the highest priority for your business growth. You are in control. Providers of off-the-shelf products must meet the needs of a diverse user group. If you can see the provider’s roadmap you may be surprised just how long you may wait for your features; at worst, your highest-priority requirements may not even be on the roadmap. In that case you will need the provider to undertake bespoke development. Often they will do this when it is convenient for them, and there is little incentive for their price to be competitive.

  4. The very process of planning and undertaking bespoke development can have wider benefits in reviewing your processes – leading to innovation in the way you do business.

When we develop software, we use Design Thinking to understand the business goal and innovate with you – not just with the software but also the interaction with people, processes, hardware and data. As a result, your unique world view and people can unleash transformations well beyond the software.

This is not an exhaustive list of the benefits of bespoke development, but it focusses on four areas that are often overlooked or undervalued. In summary, bespoke development can be the best option where uptake is essential, where innovation can lead to business growth, and where a business wants a competitive solution.

The case for smarter tech in cattle and sheep breeding

My first job in livestock performance recording was with the Genetics Section, as it was called, at Ruakura Research Centre in New Zealand. I worked part time while studying at university, transferring research trial data off the government mainframe on reel-to-reel tape, and writing inbreeding coefficient calculation software.

The genetics section was based in an old converted house, where we sat around at large, wooden, public service desks, surrounded by high stacks of computer printouts, all painstakingly bound and labelled for future use. We were the leading edge of genetic improvement and livestock performance recording.

That was nearly thirty years ago of course, and the face and capability of modern technology have radically changed. Interestingly, however, many of the practices in livestock recording industries still reflect that past golden age, and it is only recently that the software tools and databases of – let’s be generous and say – 15 years ago have started to be refreshed.

In this, the first of two articles about technology in livestock breeding, I propose that we could make much more effective use of smart technologies to increase the rate of genetic progress and address commercially important, but hard to measure, animal characteristics. In my next post, I’ll examine how technology could reduce the cost of phenotype collection (I might even explain what a phenotype is), and encourage better use of improved genetics by commercial producers.

Measure what you can’t see

In our traditional performance breeding tools, we focused on things that farmers could readily measure: kilograms and counts. Numbers of live progeny, and kilograms of liveweight, milk, and wool. Good news: most of those production traits are heritable, and we’ve made good progress over the last 30+ years.

So how do you measure characteristics that are important in modern farming systems?

  • Meat eating quality, so that consumers can repeatably have a great eating experience;
  • Feed conversion efficiency, converting inputs into product more efficiently, reducing greenhouse gas emissions per unit of product, and making the farming system more profitable;
  • For that matter, greenhouse gas emissions (where this is driven by livestock genetics rather than inoculation by a specific set of gut microorganisms);
  • Urine nitrate concentration, and hence one key environmental impact of extensive livestock farming;
  • Disease resistance and the response of animals to a variety of disease and parasite challenges;
  • Behaviour of animals around people and other livestock, including how they handle stressful environments such as being moved; and
  • Longevity, the ability of female animals to raise progeny season after season, reducing the substantial cost of replacement animals.

There are proxies for many of these measures of course. Breeding for growth rates or milk production has arguably improved greenhouse gas efficiency, for example, but in some breeding systems a change in the mature weight of animals has increased emissions. Progeny tests and laboratory measures have been used in key programmes, but they may not help us with routinely identifying the genetic outliers that will lead the next leap in genetic progress.

New measurement and sensing technologies offer real potential to help with these “hard to measure” areas of animal performance in the coming years. Accelerometer and microphone technologies can identify individual animal eating habits, heats and parturition (birth) dates. 3D and multispectral cameras tell us about carcass and meat product characteristics, and additional characteristics of milk. Increasingly, this data will be collected in-line or in near-real-time, providing a rich stream of data that could be analysed for many purposes.

The next generation of animal recording and genetic analysis systems must be built to handle this variety of real-time, streamed data – or at least the results of analysing it.

Fewer errors, more progress

A primary driver of any livestock recording and animal evaluation system is to enable breeders and commercial producers to make better decisions about the animals they use in breeding. Computers don’t select animals: people do. Where a producer chooses an animal because they like the look of its eyes, or its stance, or its colour, and ignores the potential impact of the animal on their herd, the results will be at best random, and often detrimental.

Formal breeding schemes with EBVs and indexes seek to inform better decisions about the breeding merit of animals, but EBVs can be limited by the information available:

  • Inaccurate recording of parentage and animal relationships;
  • Incorrect allocation of records to the wrong animals;
  • Transposition and recording errors when capturing data; and
  • Failing to account for the impact of environmental effects such as the feeding and management regimes of groups of animals, the age of the mother, or whether an animal was reared as a single or twin.

Technology is playing a substantial role in improving the accuracy of EBVs, notably through genomic DNA analyses resolving the fraught process of parentage recording and contributing substantially more information, earlier in each animal’s life-cycle. Better facilitation and handling of genomic data collection is well overdue in animal recording systems, and I’m pleased to see this being addressed.

In addition to genomics, electronic identification (EID) and automated recording systems can remove many identification and data capture errors, and the ability to feed this data seamlessly into modern evaluation systems without having to manually manipulate data will provide another leap forward.

Recording management groups properly has been a real limiting factor in many breeding programmes, and is one of the key hesitations in extending these to commercial producers. I believe that sensors that identify eating and movement behaviours, and location or proximity to other animals, will help us to automatically and transparently solve the problem of recording management groups and regimes. This will provide another substantial step forward in removing the noise of environmental effects.

Of course, more accurate EBVs are still only one piece of the puzzle. Helping producers to make use of this information effectively is another, and something I’ll address in my next post.

 

Rezare Systems is a bespoke software design and development company specialising in the agriculture sector. We have special expertise in building livestock recording and management systems, and tools for data collection and integration. Learn how Rezare Systems can assist your business.

More than one way to skin the data sharing cat

Last week the UK Agriculture and Horticulture Development Board (AHDB) announced an industry consultation to develop a set of principles (code) to promote the sharing of farm data. Happily, we at Rezare UK have been awarded the contract to run this project based on our unique agridata expertise and our significant experience in developing a code in NZ.

 

Improving the flow of data from farms to other organisations is seen (rightly) by the AHDB as part of the productivity agenda for UK agriculture, but there remain significant barriers to getting the data flowing in practice, mainly because of issues around trust and interoperability of disparate sets of data.

 

While the code will go some way towards addressing issues of trust (and start to build some alignment across industry on best practice when it comes to sharing and using farm data), other issues will also need to be addressed beyond the code itself, particularly the more technical aspects of exchanging and using the data.

 

Two really good examples of dealing with this have emerged in the past couple of years – DataLinker in NZ and Agrimetrics in the UK. These two approaches (the latter is one of four UK government agritech centres of excellence), while quite different in nature (and to a degree in objectives), are actually potentially very complementary.

 

DataLinker works on a model where no one party becomes the single repository and broker of farm data. Instead, data owners build APIs to standardised schemas, and do this once only, so that permissioned third parties can access that data in a known way. The exchange of data between the owner and the user of it (the “consumer”) is a bilateral relationship where DataLinker provides the permissioning (tokens) and legal frameworks (templated agreements) to streamline and standardise the process.

 

DataLinker assumes that each potential system is in fact its own “locker” (store of data) with one or more types of data. Users of some sort already interact with those systems, so what DataLinker does is standardise the way of finding which systems have which types of data (the findable F in FAIR data sharing) and in which formats (the interoperable I in FAIR). It specifies the method by which organisations agree data access rules and users provide permission (together, the accessible A of FAIR), with the result that the data is reusable (the R in FAIR). DataLinker has been focused more on the farmer or user-facing sharing of data than for broad data access necessary for researchers for example (at least without organisations explicitly addressing this).

 

Agrimetrics in the UK employs the semantic web, whereby publicly available data (published on the web) or private data made available under a licence agreement is organised according to the Resource Description Framework (RDF). Each data entity is described as a “triple” (subject-predicate-object), and in that way stored data becomes machine readable by being linked to other data entities. The data contributed is effectively “held” by Agrimetrics and then exposed through APIs (charged or free) under licence for third parties to use.
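For readers unfamiliar with RDF, the sketch below shows the subject-predicate-object idea using the Python rdflib library; the namespace and property names are invented for illustration and are not Agrimetrics’ actual vocabulary.

```python
# Illustration of the subject-predicate-object "triple" idea using rdflib.
# The namespace and property names are invented for this example; they are
# not Agrimetrics' actual vocabulary.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/farm-data/")

g = Graph()
field = EX["field/42"]

# Each statement is one triple: (subject, predicate, object).
g.add((field, RDF.type, EX.Field))
g.add((field, EX.areaHectares, Literal(12.5, datatype=XSD.decimal)))
g.add((field, EX.plantedWith, EX.WinterWheat))

# Because everything is a graph, other datasets can link to the same entities,
# and the combined data can be queried (e.g. with SPARQL) or serialised.
print(g.serialize(format="turtle"))
```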

 

Agrimetrics is focused on big data, using the semantic web to tag or structure large datasets in public HTML documents (and other data) in a way that makes them machine recognisable and readable.

 

In essence the two approaches can be differentiated thus:

  • DataLinker is a network approach – a set of protocols and standards that allow myriad parties to exchange and share data in multiple bilateral (albeit mostly templated) arrangements through standardised APIs.
  • Agrimetrics is a hub approach – where data is shared to the Agrimetrics “centre”, where it is stored, manipulated and interpreted before being shared as a more user-friendly asset under licence through APIs.

 

In many ways Agrimetrics is the more comprehensive since it seeks not only to broker data exchange but also to add value to the data by linking it and manipulating it to meet a particular consumer’s need. It can handle structured or unstructured data. This is potentially very powerful as it allows a consumer of the data to draw on Agrimetrics’ technical know-how and capacity to do increasingly clever and machine-learning based activities with the data. In other words, Agrimetrics can offer a one-stop-shop for brokering and adding value to data.

 

However, there are also problems with the approach. It assumes a high degree of integrity and legal rigour being exercised by Agrimetrics since the data sharers are effectively “letting go” of their data to be stored and used by an organisation that is looking to commercialise it. And in the absence of private data holders being prepared to release data, Agrimetrics is only as good as the publicly available (web published) data.

 

DataLinker does not (and is not intended to) become involved in negotiating commercial deals to share data. Nor does it become involved in managing, manipulating or interpreting the data. It is largely a hands-off approach designed to facilitate the network, not control it. But the adoption of the standardised schemas means there is an IT burden on the data sharers – either in-house or outsourced – to build compliant APIs.

So is one approach likely to prevail? Most likely not, and it’s actually preferable for the two to co-exist and complement each other. Here’s why:

  • First, because culturally the DataLinker approach is more aligned to putting the interests of the farmer first, and right now farmer trust in how their data is controlled and used is becoming almost the biggest blocker to progress
  • Second, because it is unlikely industry will want to have all its eggs in the one basket
  • Third, because the horsepower in Agrimetrics is potentially a game changer in terms of releasing real innovation based on farm data and thus demonstrating the value proposition to farmers from sharing their data (another piece of the sharing jigsaw that is missing)
  • Fourth, because the DataLinker approach, through its JSON-LD APIs, means data can be “readied” for consumption in a semantic way, which would complement the success of Agrimetrics
  • And fifth, because the semantic web is likely to be a long-term approach favoured particularly by the research community within the agrifood sector.

 

There are other concepts for farm data sharing that are being considered around the globe.

 

For example, Wageningen University in the Netherlands has proposed a Farm Data Train which effectively creates a number of data lockers (stores), all with the same API and approach to authorisation, which means their interfaces in effect align closely to what is proposed in DataLinker. At present this concept is focused more on plant breeding data but it could easily grow outwards.

 

So what’s my point? Well, as can be seen, there is more than one way to skin the proverbial cat. What’s important is for the sector to provide space for the approaches to breathe, so that there is increased opportunity for innovation to deliver against the productivity agenda. That’ll need some collaboration and collaborative thinking, and in the UK we shall, in the coming months, discover how the agri sector wants to address these issues.

 

It’s a great time to be involved in agridata and better still that Rezare are in the thick of shaping the future.


How DataLinker streamlines agricultural technology connections

Information is the life-blood of today’s businesses and will enable the transformations occurring in the agriculture and food business sector. Historically, information has only been exchanged between businesses at the transactional level (such as shipping notices and invoices), while richer data that could differentiate products, demonstrate environmental compliance, and optimise business value has remained isolated in silos.

DataLinker is designed to give agricultural businesses (farmers, processors, input suppliers and advisers) the ability to access and combine data from multiple sources in flexible and timely ways, without requiring many hours of skilled technical resource to carry out data exports and imports.

Integrating and effectively sharing data looms large for many businesses, so companies are investing in their own development and infrastructure – and discovering the challenges: data standardisation, supporting different interfaces for each partner organisation, and the time taken to negotiate data access agreements. DataLinker addresses these issues.

What is DataLinker?

DataLinker is a framework for agriculture and food businesses who wish to interchange data. In many ways it is analogous to the GS1 frameworks used to interchange shipping notice and invoice data, or the Ag Gateway framework used in the grain supply space. DataLinker’s primary focus was to allow farmers to bring data from a variety of sources into the tools they use for decision making, but it can be equally beneficial to all companies in the sector.

DataLinker consists of four major components that work together:

  • Data exchange specifications (“schemas”) that standardise sets of data using the Farm Data Standards and modern internet protocols (developed collaboratively with the input of member companies);
  • A small central registry where companies can discover which organisations implement each specification and how these are accessed;
  • Standardised contract terms that can be used to reduce negotiating time and legal costs in the majority of data interchanges; and
  • Technical tools to support secure agreement of data access terms, approval of access, and (where necessary) farmer permission for individual farm data sets.

DataLinker is not a database, nor a central communications hub through which all data might pass.

All of the DataLinker specifications and framework components are based on internet standards, and companies are responsible for implementing the specifications in their own IT systems, although support is provided.
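As a purely hypothetical illustration of the pattern – look up in the registry who provides a type of data, then call that provider’s own API – here is a Python sketch; the URLs, schema identifier and response fields are invented, and the published DataLinker specifications define the actual details.

```python
# Hypothetical sketch only: the registry URL, schema identifier and response
# fields are invented, not the real DataLinker endpoints. It shows the shape
# of "look up who provides this kind of data, then call their API".

import requests

REGISTRY_URL = "https://registry.example.org/providers"  # placeholder
SCHEMA_ID = "animal-weights-v1"                           # placeholder

def find_providers(schema_id):
    """Ask the (hypothetical) registry which organisations implement a schema,
    and where their endpoints live."""
    response = requests.get(REGISTRY_URL, params={"schema": schema_id}, timeout=10)
    response.raise_for_status()
    return response.json()["providers"]

def fetch_data(provider, access_token):
    """Call the provider's own API directly, using a previously agreed token.
    No data passes through the registry itself."""
    response = requests.get(
        provider["endpoint"],
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

providers = find_providers(SCHEMA_ID)
print(f"{len(providers)} organisation(s) implement {SCHEMA_ID}")
```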

How does the commercial model work?

DataLinker Limited has been incorporated as a separate entity to operate the DataLinker registry and support the collaborative development of standardised API specifications for areas where its users direct. The board of directors comprises representatives from Beef+Lamb NZ, DairyNZ, MPI, and an independent chair appointed by DataLinker’s subscribing members. DataLinker Limited operates effectively as a not-for-profit to encourage adoption and benefits for the agricultural community.

There are no transaction fees.

Members pay a joining fee of $6,000 NZD (waived for New Zealand organisations prior to 31 May 2018), and an annual subscription of either $3,500 (for organisations either providing or consuming data), or $4,500 (for heavier users both providing and consuming data). These fees are analogous to membership of a standards organisation, supporting the operation of the registry and collaborative maintenance of specifications.


Are you making authentic supply chain promises?

If you’re in the food business (whether that’s retail, food service, processing, farming, or supply), consumers are asking questions about your supply chain.

Of course, they may not be asking you directly, and they may not be asking your retail or food service partner, but they are asking: on social media, on recommendation sites such as TripAdvisor and Yelp, even over drinks at their local.

Are you providing the information they need to be confident about the quality and safety of your product? Do you have a substantiated story around provenance, animal welfare and the environment?

Safeguards such as DNA testing lasagna are “bottom of the cliff” activities, an attempt to rebuild broken trust and arguably too limited and late in the supply chain.

Future product preference and even acceptance relies upon a supply chain that can show ethical practices: in how environmental impacts are managed, natural biodiversity is encouraged, animal welfare is maintained, anti-microbial resistance is avoided, and workers and communities are treated.

Activist groups and the power of social media mean that our response to these demands must be much more solid than a promise or a declaration form. We must have the systems and measures to back up our words – and to demonstrate as much to auditors and our supply-chain partners.

For those of us at the confluence of technology and agriculture, this means we must do more than just record activities and calculate gross margins. We must step up with tools that capture rich data in support of farming activities, and which actively encourage good decisions that improve both profitability and sustainability.

All this needs to be done with minimal additional effort by farmers and their staff, and aligned to real-world processes on farm.

I’ll be speaking at MobileTech 2017, the annual summit for technology innovations in the primary sector, reflecting on these challenges. I’ll summarise some of the work Rezare Systems is doing in this space, and suggest ways the industry could apply technology to the opportunity.

This article was first published at www.rezare.com/blog  


Farmers love technology, fear misuse

Increasing numbers of farmers see technology as useful and important to their farming businesses, and farmers are looking to invest further in new technology over the coming years. Despite this, lingering concerns about data sharing, privacy and control remain.

According to the October 2016 Commonwealth Bank of Australia Agri-Insights Survey of 1600 Australian farmers, 70% of farmers believe that the digital technology available adds significant value to their businesses.

The Ag Data Survey published by the American Farm Bureau Federation (AFBF) also found that farmers are optimistic about technology, with 77% of farmers planning to invest in new technology for their farms in the next three years.

Farmers also see value in sharing and re-use of data, but privacy and control are the largest barriers to more widespread re-use.

The Agri-Insights Survey found that:

  • 76% of farmers think that there is value in sharing on-farm production information with others;
  • 58% of farmers currently share some on-farm production information with others; and
  • Of farmers who don’t see value in data sharing, “privacy concerns” (28%) was the largest reason given.

The New Zealand Office of the Privacy Commissioner surveyed New Zealanders about privacy and their attitudes to data sharing in April 2016. They noted that:

  • 57% of respondents were open to sharing data if they could choose to opt out;
  • 59% were open to sharing if there were strict controls on who could access data and how it was used; and
  • 61% were open to sharing if the data was anonymised and they couldn’t be personally identified.

The US AFBF survey also highlighted some of these concerns in an agricultural context:

  • Only 33% of farmers had signed contracts with their ag-tech provider. Another 39% knew of their provider’s policies but had not signed anything;
  • When farmers were asked if they were aware of the ways in which an ag-tech provider might use their data, 78% of farmers answered “no”; and
  • 77% of farmers were concerned about which entities can access their farm data and whether it could be used for regulatory purposes.

Not just farmers

Confidentiality and control can be barriers to companies too. After all, much of the data is about their activities, products, or equipment as well as the farm itself.

It’s not always clear how other parties will behave when sharing data. Organisations generally make reasonable and effective use of data and meet confidentiality expectations, but there is always a risk that they won’t. So companies sharing data are forced to negotiate “iron-clad” agreements, keeping the corporate lawyers busy and making any new data exchange the subject of long-winded negotiations.

As soon as you get into negotiations like this, costs rise. If one of the parties is a smaller player with less negotiating power (company or farmer), they may never be able to conclude a useful data access deal. The end result? A slower rate of innovation, the benefits of information to the farmer and overall supply chain are not fully realised, and sharing data becomes a much more expensive exercise than you would otherwise expect.

Over the years, industry players have experimented with different ways to address these issues. Centralised industry-good databases and exchanges have been proposed, and these could be very effective. Unfortunately, concern about centralising large amounts of data, and the loss of control that this brings has led players to hold back some or all of their data from such repositories.

Other groups have posited that all data should be in the exclusive control of the farmer, and have built exchanges or created open API standards on that basis. We applaud this, but it doesn’t always reflect the significant effort that companies and service providers invest in creating and curating some data sets. The end result is that some data sets are often held back from such exchanges.

A collaborative approach

The New Zealand primary industry has worked on several approaches to this problem in a collaboration between the red meat sector, the dairy sector, and the Ministry for Primary Industries.

The Farm Data Code of Practice is designed to encourage greater transparency between farmers and service providers or vendors about the data that is held, and the rights that each party has to the data. A straight-forward accreditation process gives farmers confidence that organisations have “got their house in order” when it comes to terms and conditions and data policies.

The DataLinker protocol builds on the standardised, open API approach to sharing data, but with three key considerations:

  • It provides a way for organisations to agree a Data Access Agreement without a protracted legal negotiation. Standard agreements are provided and encouraged, to reduce the overhead that all parties face in legal costs and time (that said, custom agreements are still possible where absolutely necessary).
  • Accepting a Data Access Agreement doesn’t give the recipient “open slather” to the data; for most data sets, explicit farmer approval is also required, requested and confirmed by the farmer using standard web authorisation protocols. Farmers grant permission to access data that covers their business, and can also withdraw that authorisation.
  • As an Open API approach is used rather than a central database or exchange, there is no “central service” that must be involved in each data transfer. This reduces the “attack surface” from a security perspective and enables organisations to retain control of the data they hold.
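To illustrate the shape of that flow, here is a hypothetical sketch of the permission step: once a farmer approves access through a standard web authorisation flow (OAuth 2.0 is the common pattern), the recipient exchanges the resulting authorisation code for an access token and then calls the data holder’s API directly. The endpoints and field names are invented; the DataLinker specifications define the real details, and the agreed Data Access Agreement governs what the recipient may do with the data once retrieved.

```python
# Hypothetical sketch of the permission step. Endpoints and field names are
# invented; the DataLinker specifications define the real protocol details.

import requests

TOKEN_URL = "https://dataholder.example.org/oauth/token"        # placeholder
DATA_URL = "https://dataholder.example.org/api/farms/123/data"  # placeholder

def exchange_code_for_token(auth_code, client_id, client_secret):
    """After the farmer approves access in their browser, swap the returned
    authorisation code for an access token scoped to that farm's data."""
    response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "client_id": client_id,
            "client_secret": client_secret,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]

def fetch_farm_data(access_token):
    """Data flows directly between the two organisations' systems;
    no central service sits in the middle of the transfer."""
    response = requests.get(
        DATA_URL, headers={"Authorization": f"Bearer {access_token}"}, timeout=30
    )
    response.raise_for_status()
    return response.json()
```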

Organisations adopting the DataLinker protocols benefit in several ways:

  • Farmers see that they are playing their part in maximising the use of information;
  • Standardised APIs and Data Access Agreements reduce the time and money invested in negotiating and creating custom solutions for every interaction;
  • Data Access Agreements mean that companies still retain the necessary control over high-value data sets, and are able to meet the privacy and confidentiality terms they have agreed with farmers; and
  • Companies and farmers can efficiently use sets of data which otherwise might have been too expensive to collect, or required a level of farmer input which would have discouraged adoption.

Our hope is that this framework will help organisations and farmers to maximise use of farm information, reducing long-term costs and encouraging greater innovation.

What you can do about this:

  • Want to see which New Zealand companies are accredited under the Farm Data Code of Practice? Check out www.farmdatacode.org.nz and drop an email to your key information providers to find out when they will be accredited.
  • Interested in the DataLinker protocols and how they can be adopted by your business? You’ll find information at www.datalinker.org.
  • Planning your strategy in this data space, or considering next steps? Talk to us – we’re happy to provide you with background and advice.

How on-farm data and analysis can support credence attributes

Can on-farm technologies and “big data” support food and fibre product attributes that consumers value?

In a previous article I noted a Hartman Group study that suggested that consumers are interested in attributes other than just the look and price of a product, wanting to know:

  • What ingredients are in the food or beverage product (64%);
  • How a company treats animals used in its products (44%); and
  • From where a company sources its ingredients (43%).

We call these informational aspects of a product “credence attributes”, meaning that they give credence to our decision to purchase (or not purchase) a product or service, but can’t be directly assessed from the product itself, either before purchase (on the basis of colour or feel) or after purchase (on the basis of taste, for instance).

Characteristics such as “organic”, “environmentally responsible”, “grass-fed”, and “naturally raised” relate to the story behind a product. A product may communicate these through advertising, packaging, and other ways of telling the product story.

But consumers are also looking for authenticity and integrity in their food and other products. There’s a consumer backlash when the product story on the pack is in conflict with other data sources – such as claims in news articles or secret video footage.

We’ve been exploring ways that feeds of data from on-farm technology could be used to support the product provenance and credence story – or at least signal to farmers and their supply chain partners where checks and improvements should be considered. Here are a couple of examples.

Monitoring carbon footprint

Carbon life-cycle assessments (LCAs) are used to understand the extent to which production, manufacture, and distribution of a product impacts on climate change through deforestation or release of greenhouse gases such as carbon dioxide, methane, and nitrous oxide. We learn some interesting things from these, sometimes showing that shipping food products from the other side of the world can have a lower impact than growing products locally if the local environment is less hospitable.

Importantly, producing a life-cycle assessment creates a model – a series of equations and if-then logic that describes the calculation. We can use this model with appropriate local farm and supply chain data to understand how management decisions and activities, timing, and stock or crop productivity impact on emissions.
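As a deliberately simplified illustration of what such a model looks like, the sketch below combines two emission sources with a piece of if-then logic; the emission factors and farm figures are placeholders rather than values from any published LCA methodology.

```python
# Deliberately simplified: the emission factors and farm figures below are
# placeholders, not values from any published LCA methodology.

FACTORS = {
    "methane_kg_per_cow_year": 95.0,      # placeholder enteric methane factor
    "n2o_kg_per_kg_n_applied": 0.01,      # placeholder nitrous oxide factor
    "co2e_per_kg_methane": 28.0,          # placeholder GWP-style conversion
    "co2e_per_kg_n2o": 265.0,             # placeholder GWP-style conversion
}

def farm_emissions_co2e(cows, kg_n_fertiliser, irrigated):
    """Return a season's emissions in kg CO2-equivalent for a simple dairy farm."""
    methane = cows * FACTORS["methane_kg_per_cow_year"]
    n2o = kg_n_fertiliser * FACTORS["n2o_kg_per_kg_n_applied"]
    total = (methane * FACTORS["co2e_per_kg_methane"]
             + n2o * FACTORS["co2e_per_kg_n2o"])
    # An example of the if-then logic such models contain:
    if irrigated:
        total *= 1.05  # placeholder uplift for irrigation energy use
    return total

# Feed the same model with data captured automatically on each farm, and the
# per-farm results can be benchmarked or screened for outliers.
print(f"{farm_emissions_co2e(cows=400, kg_n_fertiliser=8000, irrigated=True):,.0f} kg CO2e")
```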

Automated systems on farms that capture data about crop production, livestock weights and production, and farm activities can also deliver data for a custom life-cycle assessment. Benchmark data across multiple farms and it becomes possible to identify the patterns of complete vs missing data, to understand how climatic constraints change emissions, or to identify outliers that need to be more closely examined.

A note of caution here: as we’ve learned from nutrient budgeting, farm systems can be varied and life-cycle assessment models are frequently based on the “typical”. An outlier result may indicate greater variation than the model can handle, rather than a more or less efficient farming system.

Demonstrating animal welfare

Animal welfare and the ability to live a healthy and natural life is another area of concern to consumers. Here too, metrics collected on-farm can be the subject of automated analysis to demonstrate good practices are followed.

In Europe where a premium is payable for “grass-fed” dairy in some regions, farmers are experimenting with the use of monitoring devices – smart tags and neck bands for example. These devices capture data that provide farmers with early warning of heats and potential animal health issues – raised temperatures, more or less movement, and reduced eating for example – but can also be analysed for patterns that only show up in outdoor grazing.

In other jurisdictions, veterinary product purchase, use, and reordering records can help to demonstrate compliance with animal health plans worked out between farmers and veterinarians, and hence demonstrate good welfare practices and appropriate use of medicines. Paper records have been used for this purpose for many years, but software technologies and automated data analysis can reduce the burden of data collection and the need for manual audits and analysis.

Practical application

Some producers will find the thought of such automated systems invasive and potentially threatening. Certainly, given the potential for outliers, for good practices that just don’t quite fit the expected mould, and for technology glitch or human error, you couldn’t use these measures as legal baselines that determine “rights to farm”.

Nevertheless, application of technology and analytics such as these can help us as we seek to improve farming practice and improve the integrity of our food supply chains. A good starting point might be to apply these as tools for committed producer groups that are already aligned with supply of a premium product or market.

 

This article was first published at http://www.rezare.co.nz/blog/.
Contact us to learn how we apply software and models to agricultural data.

