
A DNS flag day

Posted Jan 26, 2019 19:48 UTC (Sat) by biergaizi (subscriber, #92498)
In reply to: A DNS flag day by farnz
Parent article: A DNS flag day

The Harmful Consequences of Postel's Maxim
https://tools.ietf.org/html/draft-thomson-postel-was-wron...

2. The Protocol Decay Hypothesis

[...]

An implementation that reacts to variations in the manner advised by
Postel sets up a feedback cycle:

o Over time, implementations progressively add new code to constrain
how data is transmitted, or to permit variations in what is received.

o Errors in implementations, or confusion about semantics can
thereby be masked.

o As a result, errors can become entrenched, forcing other
implementations to be tolerant of those errors.

An entrenched flaw can become a de facto standard. Any
implementation of the protocol is required to replicate the aberrant
behavior, or it is not interoperable. This is both a consequence of
applying Postel's advice, and a product of a natural reluctance to
avoid fatal error conditions. This is colloquially referred to as
being "bug for bug compatible".

3. The Long Term Costs

Once deviations become entrenched, there is little that can be done
to rectify the situation.

For widely used protocols, the massive scale of the Internet makes
large scale interoperability testing infeasible for all but a
privileged few. Without good maintenance, new implementations can be
restricted to niche uses, where the problems arising from
interoperability issues can be more closely managed.

[...]

Protocol maintenance can help by carefully documenting divergence and
recommending limits on what is both acceptable and interoperable.
The time-consuming process of documenting the actual protocol -
rather than the protocol as it was originally conceived - can restore
the ability to create and maintain interoperable implementations.

Such a process was undertaken for HTTP/1.1 [RFC7230]. Though this
effort took more than 6 years, it has been successful in documenting
protocol variations and describing what has over time become a far
more complex protocol.

4. A New Design Principle

The following principle applies not just to the implementation of a
protocol, but to the design and specification of the protocol.

Protocol designs and implementations should be maximally strict.

Though less pithy than Postel's formulation, this principle is based
on the lessons of protocol deployment. The principle is also based
on valuing early feedback, a practice central to modern engineering
discipline.

4.1. Fail Fast and Hard

Protocols need to include error reporting mechanisms that ensure
errors are surfaced in a visible and expedient fashion.

4.2. Implementations Are Ultimately Responsible

Implementers are encouraged to expose errors immediately and
prominently in addition to what a specification mandates.

4.3. Protocol Maintenance is Important

Protocol designers are strongly encouraged to continue to maintain
and evolve protocols beyond their initial inception and definition.
If protocol implementations are less tolerant of variation, protocol
maintenance becomes critical.
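
To make the "maximally strict" principle concrete, here is a minimal sketch (Python, entirely hypothetical and not part of the draft) contrasting a Postel-style tolerant parser with a strict one for a toy key=value message format. The strict variant surfaces the sender's bug immediately instead of masking it until it becomes entrenched:

# Illustrative sketch only: a toy key=value "protocol", not real DNS.

def parse_strict(message: str) -> dict:
    """Reject any deviation from the spec; the sender's error surfaces at once."""
    fields = {}
    for line in message.splitlines():
        if "=" not in line:
            raise ValueError(f"malformed field: {line!r}")  # fail fast and hard
        key, value = line.split("=", 1)
        if not key.isalnum():
            raise ValueError(f"invalid key: {key!r}")
        fields[key] = value
    return fields

def parse_tolerant(message: str) -> dict:
    """Postel-style: accept what parses, silently drop the rest.
    The sender's bug is masked and can become a de facto standard."""
    fields = {}
    for line in message.splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            fields[key] = value
    return fields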



A DNS flag day

Posted Jan 26, 2019 23:29 UTC (Sat) by marcH (subscriber, #57642) (4 responses)

> 4.1. Fail Fast and Hard
> Protocols need to include error reporting mechanisms that ensure
> errors are surfaced in a visible and expedient fashion.

Hi, firewalls!

Is there any big company where the IT department can be made accountable for severe losses of productivity? Curious whether they have any job offers right now.

A DNS flag day

Posted Jan 27, 2019 10:30 UTC (Sun) by mpr22 (subscriber, #60784)

Define "made accountable".

Ideally, in a way that still makes people want to work in that IT department.

A DNS flag day

Posted Jan 29, 2019 23:51 UTC (Tue) by intgr (subscriber, #39733) (2 responses)

> Hi, firewalls!

This!

ICMP provides a perfectly good mechanism to report back forbidden packets. But for some odd reason it's considered best practice to instead blackhole disallowed packets.

In more than one case, a missing firewall rule and the blackhole approach together turned a simple mistake into a cascading failure of multiple systems waiting for timeouts.
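
As a rough client-side illustration (Python sketch; the host and port are hypothetical), a REJECT-style rule fails the connection immediately with a visible error, while a DROP/blackhole rule leaves the caller waiting out its full timeout, which is exactly what cascades when several systems sit behind the same missing rule:

import socket
import time

def probe(host: str, port: int, timeout: float = 5.0) -> str:
    """Attempt a TCP connection and report how the firewall policy shows up."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "connected"
    except socket.timeout:
        # DROP/blackhole: nothing comes back; the caller burns its whole timeout
        return f"timed out after {time.monotonic() - start:.1f}s"
    except OSError as exc:
        # REJECT (TCP RST or ICMP unreachable/prohibited): fails fast, error is visible
        return f"rejected after {time.monotonic() - start:.1f}s ({exc})"

# e.g. print(probe("192.0.2.10", 5432))  # documentation address, used as a hypothetical blocked host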

A DNS flag day

Posted Feb 5, 2019 15:22 UTC (Tue) by JFlorian (guest, #49650) (1 response)

My understanding of this is that it's all about information disclosure. In other words, it's considered best practice to fail hard with ICMP forbidden on internal-facing connections, but to silently drop on external ones. Is there a strong argument that ICMP forbidden in both directions doesn't really present any additional risk? I certainly get the advantages (I've been caught out by my own firewall rules more times than I can count), but the disadvantages can be of the type you don't know about until you've been burned.

A DNS flag day

Posted Feb 5, 2019 16:45 UTC (Tue) by nybble41 (subscriber, #55106)

Assuming they already know your assigned IP range, does always responding to incoming connections with ICMP forbidden (including for unknown internal IPs) really leak significantly more information than silently dropping the packets?

