This is the code:
print '"' + title.decode('utf-8', errors='ignore') + '",' \
' "' + title.decode('utf-8', errors='ignore') + '", ' \
'"' + desc.decode('utf-8', errors='ignore') + '")'
title and desc are returned by Beautiful Soup 3 (p[0].text and p[0].prettify) and, as far as I can figure out from the BeautifulSoup 3 documentation, are UTF-8 encoded.
If I run
python.exe script.py > out.txt
I get the following error:
Traceback (most recent call last):
File "script.py", line 70, in <module>
'"' + desc.decode('utf-8', errors='ignore') + '")'
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf8' in position 264: ordinal not in range(128)
However if I run
python.exe script.py
I get no error. It happens only when the output is redirected to a file.
How to get good UTF-8 data in the output file?
Try setting the PYTHONIOENCODING environment variable to "utf8" and letting the chips fall where they may - tchrist 2012-04-04 20:22
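That suggestion can be sketched as follows (shown with python3 purely so the snippet is runnable as-is; the question itself is about Python 2, and the file name is made up):

```shell
# Sketch: force UTF-8 on Python's std streams via PYTHONIOENCODING.
# On Windows cmd.exe the equivalent is:  set PYTHONIOENCODING=utf8
export PYTHONIOENCODING=utf8
python3 -c 'import sys; print(sys.stdout.encoding)' > enc.txt
cat enc.txt   # reports a utf-8 variant instead of ascii
```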
Don't use errors='ignore'; it hides errors in your code - agf 2012-04-04 20:23
You could print your byte string directly. However, what you get from BeautifulSoup is typically a Unicode string, not a byte string. When you call title.decode() you are implicitly encoding the Unicode string to bytes so that it can be decoded, then explicitly decoding back to Unicode, then implicitly encoding to bytes again so that it can be printed - bobince 2012-04-05 10:56
Use s = u'"%s", "%s", "%s"' % (title, title, desc) and then print s.encode('utf-8') if you are sure you always want UTF-8 bytes out - bobince 2012-04-05 11:01
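A minimal sketch of that format-then-encode approach, with made-up stand-in values (your title and desc come from BeautifulSoup):

```python
# -*- coding: utf-8 -*-
# Sketch: build one Unicode string, encode exactly once at the boundary.
# title/desc values here are made up for illustration.
title = u"Sm\u00f8rrebr\u00f8d"
desc = u"Danish open sandwich"

s = u'"%s", "%s", "%s"' % (title, title, desc)
out = s.encode('utf-8')  # bytes you can safely write to a redirected stdout
```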
You can use the codecs module to write Unicode data to the file:
import codecs
f = codecs.open("out.txt", "w", "utf-8")
f.write(something)
f.close()
'print' outputs to standard output, and if your console doesn't support UTF-8 this can cause such an error even when you pipe stdout to a file.
Windows behaviour in this case is a bit complicated. You should follow the other advice here: internally use Unicode for strings and decode during input.
To answer your question: you need to print encoded byte strings (only you know which encoding!) when stdout is redirected, but you have to print Unicode strings for plain screen output (Python or the Windows console then handles conversion to the proper encoding).
I recommend structuring your script this way:
# -*- coding: utf-8 -*-
import sys, codecs

# set up output encoding
if not sys.stdout.isatty():
    # here you can set the encoding for your 'out.txt' file
    sys.stdout = codecs.getwriter('utf8')(sys.stdout)

# from now on, print all strings as Unicode
print u"Unicode string ěščřžý"
Update: see also another similar question: Setting the correct encoding when piping stdout in Python
It makes no sense to convert text to Unicode in order to print it. Work with your data in Unicode, and convert it to some encoding for output.
What your code does instead: you're on Python 2, so your default string type (str) is a byte string. In your statement you start with some UTF-8-encoded byte strings, convert them to Unicode, and surround them with quotes (regular str literals that are coerced to Unicode when combined into one string). You then pass this Unicode string to print, which pushes it to sys.stdout. To do so, it needs to turn the string into bytes. If you are writing to the Windows console, it can negotiate somehow, but if you redirect to a regular dumb file, it falls back on ASCII and complains because there is no lossless way to do that.
Solution: Don't give print a Unicode string. "Encode" it yourself to the representation of your choice:
print "Latin-1:", "unicode über alles!".decode('utf-8').encode('latin-1')
print "Utf-8:", "unicode über alles!".decode('utf-8').encode('utf-8')
print "Windows:", "unicode über alles!".decode('utf-8').encode('cp1252')
All of this should work without complaint when you redirect. It probably won't look right on your screen, but open the output file with Notepad or another editor and check whether it displays correctly in the encoding you chose. (UTF-8 is the only one that has a hope of being auto-detected; cp1252 is a likely Windows default.)
Once you get that down, clean up your code and avoid using print for file output. Use the codecs module, and open files with codecs.open instead of plain open.
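As a round-trip sketch of that advice (the sample text is made up; out.txt matches the question):

```python
# -*- coding: utf-8 -*-
import codecs

# Write Unicode text through an encoding-aware file object, then read
# it back to confirm nothing was lost in the round trip.
with codecs.open("out.txt", "w", "utf-8") as f:
    f.write(u'"Sm\u00f8rrebr\u00f8d", "Danish open sandwich"\n')

with codecs.open("out.txt", "r", "utf-8") as f:
    data = f.read()
```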
PS. If you're decoding a UTF-8 string, the conversion to Unicode should be lossless: you don't need the errors='ignore' flag. That flag is appropriate when you convert to ASCII or Latin-2 or whatever, and you want to just drop characters that don't exist in the target codepage.
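That point about errors='ignore' can be illustrated directly: encoding to ASCII silently drops the characters it cannot represent, while UTF-8 keeps everything:

```python
# -*- coding: utf-8 -*-
# errors='ignore' only has an effect when the target encoding cannot
# represent some character; UTF-8 can encode any Unicode string losslessly.
s = u"unicode \u00fcber alles!"
ascii_bytes = s.encode('ascii', errors='ignore')  # the u-umlaut is dropped
utf8_bytes = s.encode('utf-8')                    # lossless
```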
You shouldn't be calling decode more than once. In fact, you shouldn't be calling it at all. Just set the encoding on standard output and be done with it. The bug (Python's, not yours) is that Python has this really annoying behavior in that it treats redirected output differently than it does unredirected output - tchrist 2012-04-04 20:15