Ben Mitchell's typo blog, charting the excitement, activities and challenges of my 12 months studying the MA in Typeface Design at Reading University.

Now with occasional ramblings about type-related things I find interesting.

Opinions are all my own.

Recently, we were visited by Will Hill, ex-Reading student and now Senior Lecturer in Graphic Design at Anglia Ruskin University. His lecture touched upon something that’s been bothering me for some time…

From printing’s beginnings, type has taken its cues from inscriptional lettering, handwriting and calligraphy. Over the next 500 years, type started to diverge from hand-tooled forms, becoming slowly emancipated from these external sources, and becoming more standardised; new typographic environments and developments in technology both fuelled and fed off the evolving spectrum of typeforms.

But until the end of the 20th century, type designers were still constrained to using the traditional technologies of production: drawing letter patterns by hand, cutting punches and casting metal type. With the advent of digital type drawing, those technologies are slowly being left behind, with many type designers nowadays drawing letters, unmediated by paper, directly on screen.

In The Stroke, Gerrit Noordzij reduces typeforms to handwritten strokes:  letter shapes are unavoidably composed of the strokes of our pen or pencil. The stroke is the unassailable basis (‘fundamental artefact’) of a shape. For Noordzij, outlines do not define a shape, they are simply the bounds of a shaped stroke. Unfortunately, this is only one way of seeing things, and it relies on drawing letters from the inside, as though tracking the ductus with a tool. It is not clear how his theory could apply to computer-generated outlines not conceived with penstrokes in mind.

However, Noordzij is right that most of what we read is based on models of how we write. Adobe’s Robert Slimbach states “It makes sense that type designers look to the established archetypes for inspiration…Because the familiar, traditional form — which grew out of centuries of handwriting practice — remains embedded in readers’ minds, it is crucial that designers of text typefaces work within its bounds.” (Quote from the Arno Pro specimen.)

But let’s step back and think about this: why should what we read and what we write be related? After all, the physiology of the eye and that of the hand do not in any way imply a logical connection. Are the letterforms that come out of our hands when we write the best possible forms for reading?

Some people seem to think so. So-called ‘infant’ typefaces with the single-storey /ɑ/ and /ɡ/ are very popular among children’s book publishers. But perhaps these publishers have conflated reading and writing. Studies have shown that children do not find ‘adult’ versions of these letters especially problematic, and understand that one version is for reading, the other for writing (Sue Walker, 2003). Adults generally don’t find variant forms problematic either (though some people prefer their handwriting to use the typographical forms of /a/ and /g/). And letters in other scripts often differ between handwriting and type. Doesn’t this imply the connection between reading and writing is not as causal as we tend to think?

So here’s the question: if type is not writing, why has the influence of writing persisted for so long in type design?

Will Hill cast an interesting light over the matter in his lecture. He sees the stroke-and-tool paradigm as a model that ensures coherence in type design. It provides a set of ‘relational constraints’ or a ‘behaviour pattern’ that makes all the letters in a design belong to each other. Our firmly entrenched and largely unquestioned conservatism in following the stroke-and-tool model acts as a kind of safety net that gives us a set of design parameters that ensure consistency in our typeface.

If that’s the case, and with technology now at a stage where designers can work directly on screen, one would expect a quiet revolution in the way we think about type, and new models should have the chance to spring up.

Jeremy Tankard’s new Fenland typeface shows that this is indeed the case. Instead of basing Fenland’s ‘relational constraints’ on the stroke paradigm, the letters are formed by bending hypothetical steel tubes. In direct contradiction to Noordzij’s theory, Tankard abandons a stroke model and begins his drawings with outlines. The curves bend around the letterforms instead of following the shape of some internal ‘skeleton’. The curves really do unexpected things, collapsing in on themselves as they go around corners and throwing away the conventions of where thick and thin strokes appear.

Which brings us to a second reason why the stroke paradigm persists. All the questions the type designer needs to ask in designing letters can be answered by considering the stroke model, what tool is used and what logic is being applied to that stroke. Therefore, it is a paradigm that sets out sufficient parameters for designing type. Additionally, as Noordzij shows us, the model provides enough variability for different forms to emerge: expansion, translation, running and interrupted constructions can be freely combined to different degrees, generating a huge spectrum of possibilities.

Much as Tankard’s tubular premise is fascinating and original, it isn’t quite sufficient to provide all the answers to how the letters should look. For example, he has also had to define a particular ‘stroke’ order, which strokes are primary, and whether they connect in a ‘running’ or ‘interrupted’ way: the tube model itself says nothing about these matters, and the answers have to be decided on a letter-by-letter basis. This doesn’t promote the consistency that the stroke paradigm is so good at ensuring. The skill in Fenland is in Tankard’s ability to reconcile the letters consistently without a sufficiently explicit behaviour pattern.

In my Mint typeface, started in 2009, I began to see the outlines as primary, rather than the strokes. Although the strokes are still very much apparent, conceiving things this way allowed some fresh thinking. The outlines alternate between shaping the black letterforms and locking in the white counterspaces. The interplay between black and white (similar to the Japanese design concept of ‘notan’) gives the white page a more active role in the typography of the text block, in a way the stroke model wouldn’t naturally elicit. But again here, the ‘outline’ model doesn’t provide exhaustive parameters to ensure consistency.



The MATDs have now submitted their typefaces (woo!) and are moving on to the next projects, but it’s definitely time to experiment with these questions and see what alternative models can offer.

Posted at 1:07pm and tagged with: typography, stroke, Noordzij, type design, handwriting, construction, type, reading, writing, design, Fenland, stroke model.

David Březina came to visit us last week, to talk through his career in type design and his award-winning, multi-script foundry, Rosetta, to critique our typefaces, and to ask us an impossible question. What he wanted to know was how we plan to create original work in our typeface design careers over the next ten years. A ten-year plan is not something I’d naturally sit down and think about, so it certainly struck me as an intriguing question. How on earth can I set about planning my long-term creativity? It was the kind of meta question that demands you take several steps back from the process itself and consider how you approach your approach.

David suggested one way to respond to this question might be to map the design space in which to plot typefaces, and use this to identify areas that have not yet been exploited. Maps have always seemed useful, so I started to sketch out how I personally categorise designs. It turns out that I judge typefaces based on two axes, which seem to run from functional/sober to artistic/characterful and from humanist/calligraphic to constructed/experimental.

However, I quickly realised that there are two aspects to a typeface: its form and its styling. These aspects may need to be categorised separately — for example Gill Sans Shadowed has rather restrained and conventional forms, but more eccentric, trendy styling. This may mean typefaces need to be classified twice, once according to their form, and once for their styling.

I plotted a few typefaces to see if the map would work:

This sort of thing is hugely subjective, but could be useful in talking to clients, especially if illustrated with example typefaces. I suspect it could be useful in finding contrasting typefaces that work together nicely.

From this map, I wondered if everybody isn’t trying to achieve the same goals in type design: the design space in the middle of the chart should be some sort of sweet spot where ‘perfect’ tension arises through the interplay of conventionality and playful creativity. Nobody generally wants a bland or cold typeface, but neither do they want a wacky, overstated thing that won’t stop shouting. Therefore the best way to create original work is to avoid the crowded space where everything blends together. One option might be to think about balance rather than blending. Somehow the idea of yin and yang popped into my head, where the black contains a spot of white and the white has a spot of black. Why not try applying this to design? Instead of blending the opposites, draw on them both but keep their characteristics distinct. I’m sure some interesting possibilities lie that way.

There could be some other approaches that promote originality. Originality seems to stem from individuals creating work that is truly personal. FontLab’s bezier wrangling interface results in certain kinds of curves, but sketching with pencil and paper produces shapes of a different quality. So it follows that using a range of different tools (and I include different software in my definition of ‘tools’) will result in more personal outputs.

It also seems to make sense to study a range of different typefaces to see how others have solved certain problems, to broaden our repertoire of what constitutes ‘acceptable’ or ‘conventional’, and to plot new areas on the map. Reading about type allows deeper theoretical or historical concepts to inform our choices.

Lastly, typefaces solve problems, so seeking new problems is very likely to lead to original ideas.

Following David’s stay, we were delighted to welcome Reading alumnus Paul Barnes (@paulobarnesi) from independent foundry Commercial Type to talk about his approach to type. Paul emphasised the way originality can be grounded in a sensitive appraisal of historical sources. His main interest lies in 19th century British typefaces in the vein of Baskerville, but his expertise also includes European influences going back to the 17th century. He finds original ideas evolve, interestingly, from being faithful to traditional letterforms, perhaps treating them in new ways stylistically. For example, his typeface developed for the National Trust took traditional English letterforms from the 17th century, converted them to a sans-serif design and applied Optima-style modulation:

Paul’s typeface experiment, Marian, epitomises this approach: he took a selection of typefaces representing different historical eras and wondered what they would look like if stripped down to their barest form. He rigorously consulted thousands of sources to develop a well-rounded judgment of the typefaces’ inherent characteristics, and then drew their strokes as the thinnest hairlines. The result is an unexpectedly elegant family of display faces, and I’m looking forward to seeing how graphic designers treat and use it.

Originality in typeface design, then, is personal to each of us, so we shouldn’t aim to be prescriptive. It is somehow linked to inspiration, and to a full understanding of historic context and precedents. It can be offering a new take on a well-loved model, or it can be driven by a synthetic exploration of concepts. It’s been a fascinating start to our final term, and the meta-thinking will serve as a continual, quiet reminder to produce better informed work.

With thanks to David and Paul for their generosity and encouragement.

Posted at 12:07am and tagged with: originality, typeface, design, type design, MATD, Reading, Paul Barnes, David Brezina.

Our Spring term has flown by, and progress on my typeface was honestly a bit disappointing. Perhaps I tried to tackle too many things and ended up spreading myself a bit thin with unresolved attempts at Greek and Thai, or perhaps it was the packed timetable of workshops, visiting lecturers and assessment deadlines, but I was expecting to have achieved more by the end of term. I was especially unhappy that I didn’t have very much new material to show Gerard on his two visits of the term, as I’d been focussing on the non-Latin designs rather than the bold, italic and sans fonts I’m also trying to develop. On the plus side, however, my Latin lowercase in the regular weight is now complete, including most of the spacing, so I’m freezing that to work on the caps and Burmese.

One of the problems with designing Burmese type had been nagging me since the start. Burmese script seriously challenges a type designer because there are ostensibly very few things you can do with a circle: make it circular, or make it circular? At the end of the day, a circle is still a circle. Referring back to my brief, those words ‘active, fluid, lively and cheerful’ seemed hard to reconcile with drawing a circle. Whoever heard of a lively circle? And with letterforms so completely removed from the Latin, what could be translated across to harmonise the scripts? The interesting lesson was learning to see beyond these limitations, to think about how those adjectives could be implemented in different ways, and to design at a higher level. I’ll try to explain how.

Joins

In this sample from the Universal Declaration of Human Rights, using a font called Myanmar2, we can see that a good proportion of the circles in Burmese are connected (and some of the connections have not been well implemented). My first response was to look at how I’d made the junctions in my Latin letters and transpose this onto the Burmese shapes. This meant substantial thinning at the junctions to brighten the joins. At the same time, I’d responded to the requirement of ‘fluid’ by smoothing the joins into one continuous stroke.

Unfortunately, this had unwanted side-effects. The stress in the second circle has now moved around to the right side of the shape, and more importantly, the shape now bears no relation to the way it is drawn.

This image from John Okell’s indispensable Burmese: an introduction to the script clearly shows that the consonant Ta is composed of two strokes, with the tool lifted off the page between them. Although my typeface is not strongly calligraphic, it seemed unwise to contradict the stroke construction just to make the letters seem fluid. It also seemed the continuous construction didn’t give enough definition to the joins. In addition, the vowel sign Aa needs to connect to differently shaped consonants as a distinct mark, so having different joins just looked inconsistent.

It was at this stage I realised the strokes and components of Burmese needed to overlap each other rather than join in a Latin way. I also remembered Fiona saying that Indic scripts tend not to thin much at the joins.

The result above seems much more assured and less contrived.

Wraparounds and verticals

These highlighted parts in digital fonts always seemed so out of character for such a round script, and my original intention was to make them much less intrusive by ironing out the straight lines and sharp corners. My first attempts looked too clumsy, with inconsistent stress and shaky verticals. By the time I created my most recent version (third line below), I’d realised the problem with other fonts was not the verticality of the forms, but their squareness and sharp corners.

What about those adjectives then?

Yes. Active, fluid, lively and cheerful. As mentioned above, the simple way didn’t work out: lightening the joins and making the strokes continuous resulted in a style that contradicted all the evidence. Instead I chose to lighten the interiors of the circles by taking weight off the inside strokes (a new way to avoid the problem of too much monolinearity, and one that creates a pleasant balance of thick and thin strokes). I also brought more energy and bounce to the leaf shapes by making their counters much more open. (The image above shows two letters with the leaf shapes.)

Posted at 5:19pm and tagged with: Burmese, letters, font, type design, typeface, glyphs, non-Latin, MATD, Reading.

The Adobe Font Development Kit for OpenType (AFDKO, or simply FDK) is a set of command line tools that Adobe makes freely available to font developers to help with production and testing. If, like me, you’ve struggled with FontLab’s glitches only to end up with incompatible font names, duplicate encodings or extra features you didn’t write, the FDK seems like a better way to do things. Unlike glyph-editing software such as FontLab, Fontographer, Glyphs or DTL BezierMaster, the FDK directly wrangles your fonts’ behind-the-scenes properties: naming and compiling extensive font families (especially from multiple masters), scripting the OpenType code that FontLab can’t manage, and comparing stem widths to expedite hinting.

Miguel Sousa, Adobe’s Type Team Lead (and an alumnus of the MATD programme), visited last week to teach us how to use the FDK, and even came to the department on Saturday to explain pretty much everything there is to know (or at least everything we wanted to know!) about multiple masters and hinting.

The FDK model is based on CFF/PostScript outlines, which are compiled together with a set of text files into an OpenType font file. The text files set out the font’s properties (its font tables), such as naming, style linking, hinting, glyph positioning (GPOS) and glyph substitution (GSUB). Compilation is driven mostly from the command line, but there are also several Python macros that can be run directly in FontLab.

We started off working with Adobe’s multiple master font AdobeSans, which is built into Acrobat to generate instances that mimic the width and weight of missing fonts in .pdf files. After generating a regular instance of the font in .pfa format, we created the FontMenuNameDB and GlyphOrderAndAliasDB files. The former defines the family and style names for each font in the family and how they appear in menus, whilst the latter sets the order of glyphs, maps working (friendly) glyph names to final (production) names, and assigns Unicode values to the glyphs. After a couple more skeleton text files were complete, we ran the MakeOTF command, and out popped a functioning font file.
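For my own reference, here’s roughly what those two files look like. This is a minimal sketch from memory of the FDK documentation rather than the actual AdobeSans data, so the names and values are only illustrative. Each FontMenuNameDB record is keyed by a font’s PostScript name, with f= giving the family name and s= the style:

    [AdobeSans-Regular]
        f=Adobe Sans
        s=Regular

The GlyphOrderAndAliasDB is one glyph per line, in the order you want them in the final font: production name first, then the working name used in your sources (the two can differ if you prefer friendlier names while drawing), then an optional Unicode value:

    .notdef     .notdef
    a           a           uni0061
    adieresis   adieresis   uni00E4

With these two files and an OpenType features file in place, building the font is a single command run from the face’s folder (if I remember the flags right, -f names the input font and -r asks for a release-quality build):

    makeotf -f font.pfa -r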

On day two, Miguel took us through the CheckOutlines function, which is very similar to FontLab’s font audit function, checking for points at extremes, sharp junctions, crossed paths, incorrect contour directions, and overlaid points. The resulting text file listed the errors Miguel had introduced for the exercise, and after I’d corrected them, I was delighted to see CheckOutlines processing the revised .pfa smoothly without encountering any problems. The next task was to run the AutoHint tool, which is more intelligent than FontLab’s inbuilt routines, in that it reports inconsistencies in stem width (when individual hints don’t quite match up to the user-specified stem widths).
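Both tools are run from the command line against the outline font. From memory, the invocations look something like this (each tool’s -h output lists the full set of options):

    checkOutlines font.pfa > outline-report.txt   # dumps the list of path problems to a text file
    autohint font.pfa                             # adds hints, flagging stems that stray from the standard widths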

By the end of the second day, we’d created a family of eight hinted fonts from the AdobeSans masters, and the whole process seemed very promising. With a bit of practice, the FDK model is quite logical and more powerful than FontLab. The only thing I struggled with was setting up the correct directory structure, as certain files apply to the family and others need duplicating into the subdirectory for each weight or style.
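For what it’s worth, the layout we arrived at looked roughly like the sketch below (my reconstruction rather than an official template). The shared naming and glyph-order databases sit at family level, because makeotf searches the current folder and then its parents to find them, while each weight keeps its own outlines, features and fontinfo files:

    AdobeSans/
        FontMenuNameDB
        GlyphOrderAndAliasDB
        Regular/
            font.pfa
            features
            fontinfo
        Bold/
            font.pfa
            features
            fontinfo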

Day three took us through Type 1 hinting, from alignment zones to stem widths to all those odd ‘blue’ settings that have always been so opaque. In a well-rehearsed and methodical way, Miguel managed to make even the most advanced production techniques completely clear. For example, the blue scale defines at what resolution (in fact PPM) the overshoot zones are rasterised: below this level, all overshoots are trimmed. Blue shift, on the other hand, determines the minimum amount of overshoot you wish to control: overshoots less than the blue shift amount will still be suppressed if they would rasterise smaller than half a pixel. We fiddled with these settings and output the resulting font to a .pdf. Acrobat has the ability to set the on-screen resolution (under Preferences/Page Display/Resolution) to allow direct on-screen proofing — a highly useful feature I hadn’t thought to look for.
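For context, the blue settings all live in the font’s Private dictionary alongside the alignment zones. The excerpt below is only a sketch with invented values, to show where the numbers Miguel was describing actually sit:

    /BlueValues [-12 0 486 498 650 662] def   % alignment zones as bottom/top pairs: baseline, x-height, cap height
    /BlueScale 0.0395 def                     % the rendering size below which all overshoots are trimmed
    /BlueShift 7 def                          % overshoots smaller than this stay suppressed if they would rasterise under half a pixel
    /StdVW [88] def                           % the standard vertical stem width the hints are checked against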

By popular demand, the workshop continued into the weekend. Although nothing was planned in advance, a number of us wanted to understand better how to set up and manipulate multiple masters, how to do TrueType hinting and how to get our complex scripts working with anchors and mark positioning. Again, it was wonderful to have such complex ideas explained so expertly, building on the concepts from the previous three days.

Many thanks to Miguel and Adobe for a highly beneficial workshop.

Posted at 11:56am and tagged with: AFDKO, MATD, Reading, Miguel Sousa, hinting, OpenType, multiple masters, Adobe.

There’s a phrase that pops up from time to time in the department; it’s probably a Gerry-ism. ‘Designing the design’.

My take on it is that before we start drawing letterforms and thinking about details like what style of serifs we’d like, there’s the important matter of how the thing should look holistically. Can I visualise the rhythm and texture on the page, the way the letters perform together? Am I aiming for a particular mood and tone? What connotations and atmospheric values would I like to suggest?

For a text face, these questions are primarily answered not at the glyph level, but at the level of the paragraph. The image below shows that typography also has a part to play, as the two pages are set in (different?) cuts of the transitional-modern face Baskerville. Even though a certain letter may not change much in its details, the countless repetition of those details can lead to a very different impact:

Dan Rhatigan, Monotype’s Senior Type Designer (and a Reading MATD alumnus), visited us last week and suggested that by this stage of the academic year it is quite easy for students to let their designs get carried away, away from the briefs we set ourselves at the beginning. By now, everyone tends to be enjoying seeing their design take shape, and there’s a temptation to experiment with all sorts of new ideas as we get more familiar with FontLab and our skills and knowledge increase.

This advice sent me deep into my paperwork to dig out the brief I’d filed away in November. Luckily, it’s quite a strict brief, so I hadn’t really needed to keep referring to it, as I have a clear idea what I’m aiming at. What was useful was looking back at the bits that dealt with what sort of typographic tone I wanted the letters to elicit, and I’d been very explicit in defining this, using words like ‘liveliness’, ‘flow’, ‘forward motion’ and ‘bright, cheerful shapes’.

Somehow, after pondering these ideas, I was able to clearly visualise how my Burmese should look on the page. My first attempts had been dominated by a fixation on individual letterforms and stylistic details, but, following advice from Gerry and Fiona, I realised I needed more unity and an overall plan for the script. Although I’m referring to Burmese lettering, signwriting and manuscripts for inspiration, the key to a readable typeface is having all the letters click together in paragraphs without drawing attention to their actual forms.

In the first line, I was trying interesting patterns of stress where the heaviest part of the stroke was opposite the apertures. I’d been inspired by 18th century metal type, which followed this pattern. However this didn’t lead to any consistency, and the shapes seemed to be fighting with each other. The lower sample shows a more considered approach to stroke modulation, and a smoother, much more even and harmonious tone. My Burmese now feels like it has a direction, which will no doubt be further refined as I go through the rest of the year.

Posted at 7:30pm and tagged with: Burmese, type design, font, MATD, Reading.

I decided to take advantage of Gerard’s third visit of the year to finalise the relationship between my Latin serif and sans serif designs. Several people had remarked that the sans was looking too skinny, too small or too light, but I wasn’t really sure whether fixing it meant stretching the thing or redrawing completely. In the end it was an illuminating and actually quite easy process, despite the many dimensions at play.

The first thing to fix was the width. The sans was feeling too condensed, and Gerard advised me that the proportions of the lowercase /n/, for example, should match between the serif and sans designs, so I compared the ratios of height to width of both together and found they were almost identical (in fact I’d pulled in the stems of the sans 5 units to compensate for the lack of serifs):
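As a rough illustration of the arithmetic behind that comparison (with invented figures rather than my actual coordinates), it really is just two divisions:

    # Illustrative values in font units; not my real drawings.
    serif_n_height, serif_n_width = 480.0, 515.0   # n-height and advance width of /n/ in the serif
    sans_n_height, sans_n_width = 480.0, 513.0     # the sans /n/, slightly narrower without serifs

    print(round(serif_n_height / serif_n_width, 3))   # 0.932
    print(round(sans_n_height / sans_n_width, 3))     # 0.936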

I used InDesign’s character menu to mechanically stretch the letters horizontally in steps from 100% up to 108% and ran some test prints:

When comparing my printed proofs with the serif design, I found that a horizontal scale of 102¼ % fitted nicely. In fact anything over 103% began to look as though the letters were a larger point size.

The next thing to fix was the stroke weight, which I did by hand in FontLab. I increased the width of the heavy strokes in increments of 4 units and found that 8 units was the right amount to give the same text colour as the serif face.

Finally, the expansion had messed up the letterfitting, so I had to reduce all the set widths to compensate. Again, I used InDesign to quickly proof different settings. The result was a reduction of 12 units all round, and this matches the serif very nicely. Both cuts may still be spaced a little widely, but as long as I remember to tweak them both at the same time, it should be no problem to alter the overall fitting.
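Rather than dragging every sidebearing by hand, this kind of global tweak can be scripted. Below is a minimal sketch of how it could be done with RoboFab running inside FontLab, assuming the 12 units are split evenly between the two sidebearings; it isn’t exactly what I ran, just the idea:

    # Minimal sketch, assuming RoboFab is installed in FontLab Studio 5.
    from robofab.world import CurrentFont

    font = CurrentFont()
    for glyph in font:
        if glyph.box is None:       # skip empty glyphs such as the space
            continue
        glyph.leftMargin -= 6       # trim 6 units off the left sidebearing
        glyph.rightMargin -= 6      # and 6 off the right, 12 per glyph in total
    font.update()                   # refresh FontLab's display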

Compared to the original, the final result (above) had a width of 102.25%, an increase of 8 units in stem weight and a negative tracking of 12 units. The image also shows a difference between two of the printers in the department: the Xerox on the right gives consistently darker results than the HP on the left. It goes to show that we should continue to proof on as many printers as possible, rather than relying on the results of one which may be an anomaly. Luckily we have six laser printers at our disposal in the department and can also use the offset press from time to time.

With these two styles reconciled, I’ve been trying to fix my Greek! Gerry had been a bit underwhelmed by my first attempt, which wasn’t altogether surprising as I’ve never drawn Greek letters before and don’t read the language. Due to my unfamiliarity, it seemed that I’d been focused on the stylistic details like terminals and stroke junctions instead of looking at the fundamental architecture of the letterforms. Interestingly that resonated with what Fiona had been getting at with my Burmese: try to settle on the essential proportions and relationships between letters before thinking about the modulation and stylistic treatments.

I’m really struggling to assimilate this advice, as I have a strong inclination to experiment with unexpected styling and dissonant harmony whilst keeping such details under the radar at text sizes and in immersive reading. I need to remember not to run before I can walk: it can’t all be exciting until the basics are grasped, even if the forms look boring to start with. Step one leads to step two. I guess I’m seeing the forms and the styling as one process, enmeshed and dependent on each other. Another complication is that my typeface is trying to steer away from the stroke-and-tool model, and I want to let form and counterform have some independent rationale rather than following the ‘internal skeleton’ of each letter.

My solution so far seems to be to figure out what combinations of form and styling work well together. To help with this, I’ve started the Greek twice, with opposite modulations that affect the forms somewhat.

I’m not yet decided which model to follow, so I’ll keep working on both sets and make a decision later.

Posted at 3:43pm and tagged with: Greek, MATD, Reading, balance, font, harmonising, sans serif, script, serif, type design, typeface.