Ben Mitchell's typo blog charting the excitement, activities and challenges of my 12 months' studying the MA in Typeface Design at Reading University.

Now with occasional ramblings about type-related things I find interesting.

Opinions are all my own.

This experimental sans was something I drew in my sketchbook whilst travelling in 2010. The concept behind the design was to see what would happen if black and white shapes became somewhat disconnected. Counterforms are made of perpendicular straight lines while outer edges are wide superellipses.
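The superellipse edges mentioned above can be described with a small amount of maths — a minimal Python sketch, where the exponent and axis values are illustrative assumptions, not taken from Argon's actual outlines (a higher exponent gives the flatter, squarer sides that suit a wide display face; n = 2 is an ordinary ellipse):

```python
import math

def superellipse_point(t, a=1.0, b=1.0, n=4.0):
    """Point on the superellipse |x/a|^n + |y/b|^n = 1 at parameter angle t.

    n = 2 gives an ellipse; higher n flattens the sides towards a rectangle.
    """
    cos_t, sin_t = math.cos(t), math.sin(t)
    x = math.copysign(abs(cos_t) ** (2.0 / n) * a, cos_t)
    y = math.copysign(abs(sin_t) ** (2.0 / n) * b, sin_t)
    return x, y
```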


Despite its heavy weight, Argon maintains its counterspace areas in two special ways. Firstly, junctions of straight lines are drawn apart (as on M or N), which allows the counters to retain their full height. Secondly, joins between curves are given a pronounced notch or ‘bite’ out of the junction (as on B, R or 8). The result is a spurless, retro-futuristic display face.


After drawing the Latin in FontLab from sketches, I decided to try the same design with Thai script.


The family will include a couple of avant-garde outlined inverse cuts (shown below), possibly working as a layered font for «double foreground» special effects.

Posted at 1:08pm and tagged with: Argon, In progress, Thai, Latin, font, type design, typography.

This example, using my typeface Mint, shows that creating an oblique style is not straightforward. Automatically slanting the upright creates diagonally pinched letterforms: stroke widths get distorted, the angle of stress changes, and curves completely lose their balance and tension. Type designers need to correct the effects of mechanical sloping to attain optically balanced obliques. One trick is to define a new set of curves that can be tweaked for individual letterforms; it's also vital to keep comparing the upright and the oblique to check that the speed of the curves and the stroke weights are optically the same despite the translation.
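The weight distortion can be seen with a little arithmetic — a hedged sketch of 'automatic' slanting as a horizontal shear, assuming an illustrative 12° angle (not Mint's actual slant):

```python
import math

SLANT = math.radians(12)  # illustrative oblique angle, assumed for this sketch

def shear(x, y):
    """Naive 'automatic' oblique: shear x by the slant, leave y unchanged."""
    return x + y * math.tan(SLANT), y

# A vertical stem 100 units wide keeps its *horizontal* width when sheared,
# but its thickness measured perpendicular to the new slanted direction shrinks:
perp_width = 100 * math.cos(SLANT)
print(round(perp_width, 1))  # slightly thinner than the upright's 100 units
```

Horizontal strokes, meanwhile, are untouched by the shear, so the relationship between thicks and thins drifts — one reason the slanted version needs manual correction.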

(Click images to enlarge)

Posted at 12:32am and tagged with: obliques, slanted, sloped, upright, correction, optical, type design, font, italic, Mint.

This week, the annual ATypI conference is taking place in Hong Kong. Unfortunately I’m not going to be attending, but its theme, ‘Between black and white’, has prompted me to think in more depth about how the principles of notan can be implemented in typeface design, not just as a curiosity, but as a pragmatic tool to enhance readability.

One of the attractions of type design is in the breadth of its influences, and a natural borrowing is from the Japanese concept of notan. The term refers to the art of balancing the opposition of black and white in Japanese paintings, and it has been somewhat hijacked of late by type designers referring to the interplay of the black (foreground) and white (background) elements we design with when creating type. Specifically, it alludes to the dissociation of the two, allowing the designer to intentionally mould the counterforms (white areas) as desired, not just to be determined by the black lettershapes. The byproduct of this separation between letter and counterspace is that it forces the type designer to abandon the stroke model and instead adjust letters’ outlines in isolation. To me, how this can be utilised in a pragmatic way is a massively interesting area for the discipline of type design, not least because it is so little explored.

As I mentioned when starting out on my MA typeface, Lumen, my aim was to make the letters flow together, so that in dictionary settings, where things are often rather choppy, smooth reading would be enhanced by legato, connected word-images. The most connected type style would of course be a script face, with joined-up writing. For obvious reasons this is not suitable for immersive reading or for setting a dictionary. How then can letters be linked together? How about making the white, rather than the black, be the part that joins up? A perfect opportunity to employ original creative responses to foster the principles of readability.

This sort of ‘pure’ design had been tackled by Evert Bloemsma in his Legato typeface; as Kris Sowersby notes, the design is ‘free of stylistic conceits’ with the important decisions made in pursuit of the goal of connectedness. What Bloemsma had done was rotate (or skew) the inner counters in the opposite direction to the outer black forms, breaking up the letters’ rigid uprightness and giving the white space a direction, rather than letting it simply exist as a dead space or byproduct of the black letters. While I didn’t want to follow this logic to the same conclusion as Bloemsma, I found in his theory a seed for further development.


Above: in most typefaces, based on a tool-and-stroke model, the white counterspaces are simply a byproduct of where the edges of the tool fall. In Zapf Chancery, the track of the pen (green) and the width and angle of the nib (yellow) determine the edges of the stroke at every point, and the stroke’s edges form the boundary between black and white. Now type design can be emancipated from the tool-and-stroke model with interesting results.
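The tool-and-stroke logic described above can be sketched in a few lines: a hypothetical fixed broad nib dragged along a track, with the stroke's edges falling out of tool plus path (a deliberate simplification of the 'translation' model — the function name and values are my own, not from any real tool):

```python
import math

def nib_edges(track, nib_width=30.0, nib_angle_deg=30.0):
    """'Translation' stroke model: a fixed-width, fixed-angle nib dragged
    along a track of (x, y) points. Each track point yields two outline
    points, offset by half the nib along its fixed angle — the black/white
    boundary is entirely determined by tool and path."""
    half = nib_width / 2.0
    a = math.radians(nib_angle_deg)
    dx, dy = half * math.cos(a), half * math.sin(a)
    left = [(x - dx, y - dy) for x, y in track]
    right = [(x + dx, y + dy) for x, y in track]
    return left, right
```

Designing outlines directly, by contrast, means choosing every edge point freely — there is no nib for the white space to be a byproduct of.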

I took the idea of designing the whites semi-independently, but sought to make them connect across letters. Underpinning the architecture of every letterform was the theory that the white space should bend and flow to lead the eye smoothly along the reading line. The inner contours of letters mainly project outwards towards the adjacent letters, and employ curves of a lower frequency than the outside black edges. As well as helping the letters compose into harmonious images, this flattening of the counters emphasises the horizontal reading direction from left to right.


The image above shows that the countershapes in Helvetica are directed inwards (or at nothing), making the letters stand alone, referring only to themselves; the lack of activity between letters also creates dead space that breaks words apart. In Lumen, the countershapes have been designed to harmonise and lead the eye across intervening letterspaces.

These days, people increasingly ask whether there is any need for new typefaces, and whether everything interesting hasn't already been tried. I'm strongly opposed to that view: type design, like architecture or musical composition, is a response to particular circumstances of time and place, and at its best it blends personal expression, original and critical thinking, an appreciation of type history and underlying theory, and skills polished through extensive practice. In our increasingly connected world, the interesting question is what other concepts type design can borrow and benefit from.

Posted at 1:57pm and tagged with: one column, notan, type design, font, stroke model, Bloemsma.

Our fantastic year at Reading is now complete, and the class of 2012 has dispersed to make way for the incoming students — a talented looking bunch judging by some of the websites I’ve seen.

The final weeks of term and the summer holiday flew past, with our typeface submitted towards the end of June, along with the Reflection on Practice that charts our progress and explains our decisions. The summer months saw us fully absorbed in research and writing, with everyone working their hardest and going about with anguished dissertation faces. Oddly enough, at the start of the course, the dissertation was the one part I was not looking forward to, but my topic — Burmese printing types — is an area that has not been studied before, making it extremely rewarding to pull the histories together into a coherent narrative. The amount of material covered was quite astonishing, as the topic spans two hundred years of printing, and involved multiple visits to London's libraries, a whole lot of Googling and careful examination of type specimens to see how the metal types were composed. Hopefully I'll be sharing some of the interesting parts in a future blog post, since the story of Burmese type parallels the developments in type of every other world script, and was, at least for me, very enlightening.

I’m now back in Sussex, working on freelance commissions. I’ve got three typeface projects on the go, and graphic design jobs to fill up the gaps, so I plan to keep this blog going as I develop and produce more work. As one typophile said, the Masters is just a beginning :)

Posted at 9:47am.

August 16th 2012


typolka:

Mastering Type – Mota Italic – Berlin

The Mastering Type exhibition now on at Mota Italic in Berlin features the work of type design students from KABK and Reading.

Posted at 3:00pm.


Lovely printed sheets from my MATD typeface specimen!

Posted at 4:24pm and tagged with: Lumen, specimen, spread, type design, typeface, multi-script.

Recently, we were visited by Will Hill, ex-Reading student and now Senior Lecturer in Graphic Design at Anglia Ruskin University. His lecture touched upon something that’s been bothering me for some time…

From printing’s beginnings, type has taken its cues from inscriptional lettering, handwriting and calligraphy. Over the next 500 years, type started to diverge from hand-tooled forms, becoming slowly emancipated from these external sources, and becoming more standardised; new typographic environments and developments in technology both fuelled and fed off the evolving spectrum of typeforms.

But until the end of the 20th century, type designers were still constrained to using the traditional technologies of production: drawing letter patterns by hand, cutting punches and casting metal type. With the advent of digital type drawing, those technologies are slowly being left behind, with many type designers nowadays drawing letters, unmediated by paper, directly on screen.

In The Stroke, Gerrit Noordzij reduces typeforms to handwritten strokes: letter shapes are unavoidably composed of the strokes of our pen or pencil. The stroke is the unassailable basis (‘fundamental artefact’) of a shape. For Noordzij, outlines do not define a shape; they are simply the bounds of a shaped stroke. Unfortunately, this is only one way of seeing things, and it relies on drawing letters from the inside, as though tracking the ductus with a tool. It is not clear how his theory could apply to computer-generated outlines not conceived with penstrokes in mind.

However, Noordzij is right that most of what we read is based on models of how we write. Adobe’s Robert Slimbach states “It makes sense that type designers look to the established archetypes for inspiration…Because the familiar, traditional form — which grew out of centuries of handwriting practice — remains embedded in readers’ minds, it is crucial that designers of text typefaces work within its bounds.” (Quote from the Arno Pro specimen.)

But let’s step back and think about this: why should what we read and what we write be related? After all, the physiology of the eye and that of the hand do not in any way imply a logical connection. Are the letterforms that come out of our hands when we write the best possible forms for reading?

Some people seem to think so. So-called ‘infant’ typefaces with the single-storey /ɑ/ and /ɡ/ are very popular among children’s book publishers. But perhaps these publishers have conflated reading and writing. Studies have shown that children do not find ‘adult’ versions of these letters especially problematic, and understand that one version is for reading, the other for writing. (Sue Walker, 2003). Adults generally don’t find variant forms problematic (though some people prefer their handwriting to use typographical forms of the /a/ and /g/). And letters in other scripts often have differences between handwriting and type. Doesn’t this imply the connection between reading and writing is not as causal as we tend to think?

So here’s the question: type is not writing. So why has the influence of writing persisted for so long in type design?

Will Hill cast an interesting light over the matter in his lecture. He sees the stroke-and-tool paradigm as a model that ensures coherence in type design: it provides a set of ‘relational constraints’, a ‘behaviour pattern’, that makes all the letters in a design belong to each other. Our firmly entrenched, largely unquestioned habit of following this model acts as a safety net, supplying the design parameters that keep a typeface consistent.

If that’s the case, and with technology now at a stage where designers can work directly on screen, one would now expect there to be a quiet revolution in the way we think about type, and new models should have the chance to spring up.

Jeremy Tankard’s new Fenland typeface shows that this is indeed the case. Instead of basing Fenland’s ‘relational constraints’ on the stroke paradigm, the letters are formed by bending hypothetical steel tubes. In direct contradiction to Noordzij’s theory, Tankard abandons a stroke model and begins his drawings with outlines. The curves bend around the letterforms instead of following the shape of some internal ‘skeleton’. The curves really do unexpected things, collapsing in on themselves as they go around corners and throwing away the conventions of where thick and thin strokes appear.

Which brings us to a second reason why the stroke paradigm persists. All the questions the type designer needs to ask in designing letters can be answered by considering the stroke model, what tool is used and what logic is being applied to that stroke. Therefore, it is a paradigm that sets out sufficient parameters for designing type. Additionally, as Noordzij shows us, the model provides enough variability for different forms to emerge: expansion, translation, running and interrupted constructions can be freely combined to different degrees, generating a huge spectrum of possibilities.

Much as Tankard’s tubular premise is fascinating and original, it isn’t quite sufficient to provide all the answers to how the letters should look. For example, he has also had to define a particular ‘stroke’ order, which strokes are primary, and whether they connect in a ‘running’ or ‘interrupted’ way: the tube model itself says nothing about these matters, and the answers have to be decided on a letter-by-letter basis. This doesn’t promote the consistency that the stroke paradigm is so good at ensuring. The skill in Fenland lies in Tankard’s ability to reconcile the letters consistently without a sufficiently explicit behaviour pattern.

In my Mint typeface, started in 2009, I began to see the outlines as primary, rather than the strokes. Although the strokes are still very much apparent, conceiving things this way allowed some fresh thinking. The outlines alternate between shaping the black letterforms and locking in the white counterspaces. The interplay between black and white (similar to the Japanese design concept of ‘notan’) gives the white page a more active role in the typography of the text block, in a way the stroke model wouldn’t naturally elicit. But again here, the ‘outline’ model doesn’t provide exhaustive parameters to ensure consistency.



The MATDs have now submitted their typefaces (woo!) and are moving on to the next projects, but it’s definitely time to experiment with these questions and see what alternative models can offer.

Posted at 1:07pm and tagged with: typography, stroke, Noordzij, type design, handwriting, construction, type, reading, writing, design, Fenland, stroke model.

I’ve been meaning to give Spiro curves a try for a while. By now, I’m fairly confident with Bezier curves, but I’m always interested in finding other ways to do things, and Spiro curves have a very different quality to them. As we all know, Bezier splines are especially nightmarish when drawing curves that don’t have an even radius — think of a slightly bowed vertical stem leading into a much tighter serif bracket. The problem is Beziers only allow control of the slope of a curve, without balancing the curvature. So extra points and handles are required to create smoothly connecting curves. With Spiro curves, on the other hand, as points are moved, the equation instantly rebalances to maintain a smoothly changing curvature, even when the curves have a significant change in radius from one point to the next. There are no control handles to worry about, making it easier to quickly adjust letterforms.
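The slope-versus-curvature point can be made concrete. For a cubic Bezier, the curvature at an endpoint follows from the first two handles, so two segments can share a tangent direction at a join (matching slope) yet disagree in curvature there — the kink the eye picks up. A small sketch, with arbitrary illustrative coordinates:

```python
def bezier_curvature_at_start(p0, p1, p2):
    """Curvature of a cubic Bezier at t=0: (2/3)|d1 x d2| / |d1|^3,
    where d1 = p1 - p0 and d2 = p2 - p1 (p3 does not affect it)."""
    d1 = (p1[0] - p0[0], p1[1] - p0[1])
    d2 = (p2[0] - p1[0], p2[1] - p1[1])
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    return (2.0 / 3.0) * abs(cross) / (d1[0] ** 2 + d1[1] ** 2) ** 1.5

# Two segments leaving the same on-curve point with the same tangent (+x),
# but different handle placements: slope matches, curvature doesn't.
k_in = bezier_curvature_at_start((0, 0), (40, 0), (70, 20))
k_out = bezier_curvature_at_start((0, 0), (100, 0), (170, 20))
print(k_in > k_out)  # True: a curvature jump despite the shared tangent
```

Spiro points sidestep this by solving for curvature continuity across the whole path rather than leaving it to handle placement.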

I used Raph Levien’s Spiro tools in FontForge. FontForge was a complete nuisance to install, and doesn’t feel very natural yet, but my first adventures with Spiro curves seem quite promising, and it would be hugely interesting if FontLab could add support for this alternative way of drawing in the future.

The best thing about Spiro on FontForge is when the equation can’t be solved and the curve pops into loopy madness! I’m looking forward to seeing the more sensible results of this little experiment.

Posted at 12:29am and tagged with: Spiro, curve, bezier, spline, letterform, clothoid, FontForge.


David Březina came to visit us last week, to talk through his career in type design and his award-winning, multi-script foundry, Rosetta, to critique our typefaces, and to ask us an impossible question. What he wanted to know was how we plan to create original work in our typeface design careers over the next ten years. A ten-year plan is not something I’d naturally sit down and think about, so it certainly struck me as an intriguing question. How on earth can I set about planning my long-term creativity? It was the kind of meta question that demands you take several steps back from the process itself and consider how one approaches one’s approach.

David suggested one way to respond to this question might be to map the design space in which to plot typefaces, and use this to identify areas that have not yet been exploited. Maps have always seemed useful, so I started to sketch out how I personally categorise designs. It turns out that I judge typefaces based on two axes, which seem to run from functional/sober to artistic/characterful and from humanist/calligraphic to constructed/experimental.

However, I quickly realised that there are two aspects to a typeface: its form and its styling. These aspects may need to be categorised separately — for example Gill Sans Shadowed has rather restrained and conventional forms, but more eccentric, trendy styling. This may mean typefaces need to be classified twice, once according to their form, and once for their styling.
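The double classification could be represented as simply as a table of paired placements — a speculative sketch in which every coordinate is just my subjective guess, not measured data:

```python
# Axes run -1..+1: functional/sober -> artistic/characterful (x),
# humanist/calligraphic -> constructed/experimental (y).
# Each typeface gets two placements: one for its forms, one for its styling.
design_space = {
    "Gill Sans Shadowed": {"form": (-0.3, -0.2), "styling": (0.7, 0.4)},
    "Helvetica":          {"form": (-0.6, 0.3),  "styling": (-0.5, 0.2)},
}

def distance(a, b):
    """Euclidean distance between two placements on the map."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# A face whose styling sits far from its form is doing something eccentric
# with a conventional skeleton — Gill Sans Shadowed being the example above:
gap = distance(design_space["Gill Sans Shadowed"]["form"],
               design_space["Gill Sans Shadowed"]["styling"])
print(round(gap, 2))
```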

I plotted a few typefaces to see if the map would work:

This sort of thing is hugely subjective, but could be useful in talking to clients, especially if illustrated with example typefaces. I suspect it could be useful in finding contrasting typefaces that work together nicely.

From this map, I wondered if everybody isn’t trying to achieve the same goals in type design: the design space in the middle of the chart should be some sort of sweet spot where ‘perfect’ tension arises through the interplay of conventionality and playful creativity. Nobody generally wants a bland or cold typeface, but neither do they want a wacky, overstated thing that won’t stop shouting. Therefore the best way to create original work is to avoid the crowded space where everything blends together. One option might be to think about balance rather than blending. Somehow the idea of yin and yang popped into my head, where the black contains a spot of white and the white has a spot of black. Why not try applying this to design? Instead of blending the opposites, draw on them both but keep their characteristics distinct. I’m sure some interesting possibilities lie that way.

There could be some other approaches that promote originality. Originality seems to stem from individuals creating work that is truly personal. FontLab’s bezier wrangling interface results in certain kinds of curves, but sketching with pencil and paper produces shapes of a different quality. So it follows that using a range of different tools (and I include different software in my definition of ‘tools’) will result in more personal outputs.

It seems also to make sense to study a range of different typefaces to see how others have solved certain problems, and broaden our repertoire of what constitutes ‘acceptable’ or ‘conventional’; also, to plot new areas on the map. Reading about type allows deeper, theoretical or historical concepts to inform our choices.

Lastly, typefaces solve problems, so seeking new problems is very likely to lead to original ideas.

Following David’s stay, we were delighted to welcome Reading alumnus Paul Barnes (@paulobarnesi) from independent foundry Commercial Type to talk about his approach to type. Paul emphasised the way originality can be grounded in a sensitive appraisal of historical sources. His main interest lies in 19th-century British typefaces in the vein of Baskerville, but his expertise also includes European influences going back to the 17th century. He finds that original ideas evolve, interestingly, from being faithful to traditional letterforms, perhaps treating them in new ways stylistically. For example, his typeface developed for the National Trust took traditional English letterforms from the 17th century, converted them to a sans-serif design and applied Optima-style modulation:

Paul’s typeface experiment, Marian, epitomises this approach: he took a selection of typefaces representing different historical eras and wondered what they would look like stripped down to their barest form. He rigorously consulted thousands of sources to develop a well-rounded judgment of the typefaces’ inherent characteristics, and then drew their strokes in the thinnest hairlines. The result is an unexpectedly elegant family of display faces, and I’m looking forward to seeing how graphic designers treat and use it.

Originality in typeface design, then, is personal to each of us, so we shouldn’t aim to be prescriptive. It is somehow linked to inspiration, and to a full understanding of historic context and precedents. It can be offering a new take on a well-loved model, or it can be driven by a synthetic exploration of concepts. It’s been a fascinating start to our final term, and the meta-thinking will serve as a continual, quiet reminder to produce better informed work.

With thanks to David and Paul for their generosity and encouragement.

Posted at 12:07am and tagged with: originality, typeface, design, type design, MATD, Reading, Paul Barnes, David Brezina.

The year has flown past at an alarming speed — not that it’s over yet, but as our project deadlines are in June, it feels like we’re very much in the final stretch. After our fantastic field trip to Antwerp, Amsterdam, the Hague, Haarlem and Bussum, the Easter break gave us some much needed breathing room to get down to some serious business with FontLab.

My serif face now has the complete character set (aside from Thai, which I’ve started but am not sure whether to continue — spending time on it could seriously compromise the quality of my other styles) and spacing is almost done. The sans face was feeling a little bland or rigid or something, and I’ve added more warmth and softness by drawing it slightly away from where it started. One thing I’ve realised through the year is that even with the strongest concept behind a design, or perhaps especially with a strong concept, there is a time to let go of one’s fixed ideas about a typeface and realise that things can evolve in their own direction and gain a stronger identity. So although I had begun with some fusion of Excoffon’s and Bloemsma’s ideas, I’ve allowed myself to open the gates to my own expression. I think that’s happened in a few areas of my typeface, so perhaps there’s some higher conclusion one can draw about being attentive to a design maturing and outgrowing its origins.

Of course it’s not always certain which way to take a design, so I tried a couple of possibilities before settling on a more humanist option. The top row shows the design that wasn’t quite working for me, the middle rows show exploration of a couple of new options, and the bottom row shows something I’m more comfortable with.

Bold is currently under development, then I’m hoping to give condensed a go, if time permits.

Posted at 11:31pm and tagged with: sans serif, typeface, MATD, font, design.