Using artificial intelligence for text extraction often requires
handling privacy-sensitive text. To avoid revealing confidential
information, data owners and practitioners can use differential privacy,
a definition of privacy with provable guarantees.
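For reference, a randomized mechanism M is (ε, δ)-differentially private if, for every pair of datasets D and D' that differ in a single record and every set of outputs S,

```latex
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta .
```

Smaller ε and δ correspond to stronger guarantees; these are the two numbers quoted in the results below.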
In this work, we show how differential privacy can be applied to
feature hashing. Feature hashing is a common technique for handling
out-of-dictionary vocabulary and for creating a lookup table that finds
feature weights in constant time. A distinctive property of feature
hashing is that every possible feature is mapped to a discrete, finite
output space.
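To make that finite output space concrete, here is a minimal feature-hashing sketch; the hash function, table size, and names are illustrative assumptions, not the implementation described in this paper:

```python
import hashlib

TABLE_SIZE = 2 ** 20  # assumed table size; practical systems often use a power of two

def hash_feature(feature: str) -> int:
    """Map an arbitrary feature string to an index in the finite space [0, TABLE_SIZE)."""
    digest = hashlib.sha256(feature.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % TABLE_SIZE

weights = [0.0] * TABLE_SIZE               # one weight slot per possible hash value
weights[hash_feature("contract")] = 0.42   # store a learned feature weight

# An out-of-dictionary word still maps into the same finite table,
# so its weight can be looked up in constant time.
print(weights[hash_feature("zyzzogeton")])
```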
Our proposed technique takes advantage of this fact and makes hashed
feature sets Rényi-differentially private. The technique enables data
owners to privatize any model that stores its data-dependent weights in
a hash table, and it protects against inference attacks on the model
output.
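This abstract does not spell out the mechanism itself; one standard way to obtain a Rényi guarantee over a finite table is the Gaussian mechanism, sketched below with assumed sensitivity and noise-scale values:

```python
import numpy as np

def privatize_weights(weights: np.ndarray, sigma: float,
                      rng: np.random.Generator) -> np.ndarray:
    """Gaussian mechanism over the whole finite hash table.

    For a vector-valued function with L2 sensitivity Delta, adding
    N(0, sigma^2) noise satisfies (alpha, alpha * Delta**2 / (2 * sigma**2))-
    Renyi differential privacy for every order alpha > 1.
    """
    return weights + rng.normal(loc=0.0, scale=sigma, size=weights.shape)

rng = np.random.default_rng(0)
table = np.zeros(2 ** 20)
table[12345] = 0.42  # weight learned from private training text

# Because every possible feature hashes into this finite table, noise is
# added to *every* slot; an attacker probing the shared model cannot
# distinguish slots touched by training data from slots that were empty.
private_table = privatize_weights(table, sigma=4.0, rng=rng)
```

A Rényi-DP bound of this form can then be converted into an (ε, δ) statement of the kind quoted in the results.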
As a case study, we show how we implemented our technique in commercial
software that enables users to train text sequence classifiers on their
own documents and share the classifiers with other users without leaking
training data. Results show that even common words can be protected with
(0.06, 10^-5)-differential privacy, with only a 1% average reduction in
recall and no change in precision.