[c#] How do I get a consistent byte representation of strings in C# without manually specifying an encoding?


It depends on the encoding of your string (ASCII, UTF-8, ...):


byte[] b1 = System.Text.Encoding.UTF8.GetBytes(myString);
byte[] b2 = System.Text.Encoding.ASCII.GetBytes(myString);


string pi = "\u03a0";
byte[] ascii = System.Text.Encoding.ASCII.GetBytes(pi);
byte[] utf8 = System.Text.Encoding.UTF8.GetBytes(pi);

Console.WriteLine(ascii.Length); // Will print 1
Console.WriteLine(utf8.Length); // Will print 2
Console.WriteLine(System.Text.Encoding.ASCII.GetString(ascii)); // Will print '?'


Internally, the .NET Framework uses UTF-16 to represent strings, so if you simply want to get the exact bytes that .NET uses, use System.Text.Encoding.Unicode.GetBytes(...).
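For example, continuing with the pi snippet above:

byte[] utf16 = System.Text.Encoding.Unicode.GetBytes(pi);
Console.WriteLine(utf16.Length); // Will print 2: one 2-byte UTF-16 code unit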

For more information, see Character Encoding in the .NET Framework (MSDN).



I'm going to encrypt the string. I can encrypt it without converting, but I'd still like to know why encoding comes into play here.

Also, why should encoding even be taken into consideration? Can't I simply get what bytes the string has been stored in? Why is there a dependency on character encodings?

It depends on what you want the bytes FOR

This is because, as Tyler so aptly said, "Strings aren't pure data. They also have information." In this case, the information is an encoding that was assumed when the string was created.

Assuming that you have binary data (rather than text) stored in a string

This is based on the OP's comment on his own question, and it is the correct question if I understand the OP's hints at the use case.

Storing binary data in strings is probably the wrong approach because of the assumed encoding mentioned above! Whatever program or library stored that binary data in a string (instead of a byte[] array which would have been more appropriate) has already lost the battle before it has begun. If they are sending the bytes to you in a REST request/response or anything that must transmit strings, Base64 would be the right approach.
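A minimal sketch of that approach (the sample bytes are just illustrative data):

byte[] sampleBytes = { 0x00, 0xFF, 0x10, 0x80 };
string wireSafe = Convert.ToBase64String(sampleBytes);    // safe to embed in JSON/XML/headers
byte[] roundTripped = Convert.FromBase64String(wireSafe); // identical to sampleBytes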

If you have a text string with an unknown encoding

Everybody else answered this incorrect question incorrectly.

If the string looks good as-is, just pick an encoding (preferably one starting with UTF), use the corresponding System.Text.Encoding.???.GetBytes() function, and tell whoever you give the bytes to which encoding you picked.
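A minimal sketch of that advice (the UTF-8 choice and variable names are just for illustration):

string chosenEncoding = "utf-8";
byte[] payload = System.Text.Encoding.GetEncoding(chosenEncoding).GetBytes(myString);
// hand the receiver both payload and chosenEncoding so they can decode it correctly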

You can use the following code for conversion between a string and a byte array.

string s = "Hello World";

// String to byte[]

byte[] byte1 = System.Text.Encoding.Default.GetBytes(s);

// OR (note: ASCIIEncoding.Default is the same static property inherited from
// Encoding, so this is still Encoding.Default, not ASCII)

byte[] byte2 = System.Text.ASCIIEncoding.Default.GetBytes(s);

// byte[] to string -- decode with the same encoding that produced the bytes,
// or non-ASCII characters will be garbled

string str = System.Text.Encoding.Default.GetString(byte1);

If you really want a copy of the underlying bytes of a string, you can use a function like the one that follows. However, you shouldn't; please read on to find out why.

// using System.Runtime.InteropServices;
[DllImport("msvcrt.dll",
        EntryPoint = "memcpy",
        CallingConvention = CallingConvention.Cdecl,
        SetLastError = false)]
private static extern unsafe void* UnsafeMemoryCopy(
    void* destination,
    void* source,
    uint count);

public static unsafe byte[] GetUnderlyingBytes(string source)
{
    var length = source.Length * sizeof(char);
    var result = new byte[length];
    fixed (char* firstSourceChar = source)
    fixed (byte* firstDestination = result)
    {
        var firstSource = (byte*)firstSourceChar;
        UnsafeMemoryCopy(firstDestination, firstSource, (uint)length);
    }

    return result;
}
This function will get you a copy of the bytes underlying your string, pretty quickly. You'll get those bytes in whatever way they are encoded on your system. This encoding is almost certainly UTF-16LE, but that is an implementation detail you shouldn't have to care about.
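For example, on a little-endian machine (the common case):

byte[] raw = GetUnderlyingBytes("\u03a0");   // Π
// raw is { 0xA0, 0x03 }: the single UTF-16LE code unit 0x03A0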

It would be safer, simpler and more reliable to just call:

System.Text.Encoding.Unicode.GetBytes(yourString)

In all likelihood this will give the same result, is easier to type, and the bytes will always round-trip with a call to:

System.Text.Encoding.Unicode.GetString(bytes)

    string text = "string";
    byte[] array = System.Text.Encoding.UTF8.GetBytes(text);

The result is:

[0] = 115
[1] = 116
[2] = 114
[3] = 105
[4] = 110
[5] = 103

The closest approach to the OP's question is Tom Blodget's, which actually goes into the object and extracts the bytes. I say closest because it depends on the implementation of the string object.

"Can't I simply get what bytes the string has been stored in?"

Sure, but that's where the fundamental error in the question arises. The String is an object which could have an interesting data structure. We already know it does, because it allows unpaired surrogates to be stored. It might store the length. It might keep a pointer to each of the 'paired' surrogates allowing quick counting. Etc. All of these extra bytes are not part of the character data.

What you want is each character's bytes in an array. And that is where 'encoding' comes in. By default you will get UTF-16LE. If you don't care about the bytes themselves except for the round trip, then you can choose any encoding, including the 'default', and convert it back later (assuming the same parameters, such as what the default encoding was, code points, bug fixes, things allowed such as unpaired surrogates, etc.).
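A quick illustration of where that round trip breaks (the lone surrogate makes the string deliberately malformed):

string broken = "X\ud800Y";                        // unpaired high surrogate
byte[] utf8 = System.Text.Encoding.UTF8.GetBytes(broken);
string back = System.Text.Encoding.UTF8.GetString(utf8);
Console.WriteLine(back == broken);                 // False: the surrogate became U+FFFD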

But why leave the 'encoding' up to magic? Why not specify the encoding so that you know what bytes you are going to get?

"Why is there a dependency on character encodings?"

Encoding (in this context) simply means the bytes that represent your string. Not the bytes of the string object. You wanted the bytes the string has been stored in -- this is where the question was asked naively. You wanted the bytes of string in a contiguous array that represent the string, and not all of the other binary data that a string object may contain.

Which means how a string is stored is irrelevant. You want a string "Encoded" into bytes in a byte array.

I like Tom Blodget's answer because he took you towards the 'bytes of the string object' direction. It's implementation-dependent, though, and because he's peeking at internals it might be difficult to reconstitute a copy of the string.

Mehrdad's response is wrong because it is misleading at the conceptual level. You still have a list of bytes, encoded. His particular solution allows for unpaired surrogates to be preserved -- this is implementation dependent. His particular solution would not produce the string's bytes accurately if GetBytes returned the string in UTF-8 by default.

I've changed my mind about this (Mehrdad's solution) -- this isn't getting the bytes of the string; rather, it is getting the bytes of the character array that was created from the string. Regardless of encoding, the char datatype in C# is a fixed size. This allows a consistent-length byte array to be produced, and it allows the character array to be reproduced based on the size of the byte array. So if the encoding were UTF-8, but each char were 6 bytes to accommodate the largest UTF-8 value, it would still work. So indeed -- the encoding of the character does not matter.

But a conversion was used -- each character was placed into a fixed size box (c#'s character type). However what that representation is does not matter, which is technically the answer to the OP. So -- if you are going to convert anyway... Why not 'encode'?
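For reference, the BlockCopy approach under discussion (Mehrdad's) boils down to a sketch like this:

static byte[] GetBytes(string str)
{
    byte[] bytes = new byte[str.Length * sizeof(char)];
    System.Buffer.BlockCopy(str.ToCharArray(), 0, bytes, 0, bytes.Length);
    return bytes;
}

static string GetString(byte[] bytes)
{
    char[] chars = new char[bytes.Length / sizeof(char)];
    System.Buffer.BlockCopy(bytes, 0, chars, 0, bytes.Length);
    return new string(chars);
}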

// using System.IO;
// using System.Runtime.Serialization.Formatters.Binary;
BinaryFormatter bf = new BinaryFormatter();
byte[] bytes;
MemoryStream ms = new MemoryStream();

string orig = "喂 Hello 谢谢 Thank You";
bf.Serialize(ms, orig);
ms.Seek(0, 0);
bytes = ms.ToArray();

MessageBox.Show("Original bytes Length: " + bytes.Length.ToString());

MessageBox.Show("Original string Length: " + orig.Length.ToString());

for (int i = 0; i < bytes.Length; ++i) bytes[i] ^= 168; // pseudo encrypt
for (int i = 0; i < bytes.Length; ++i) bytes[i] ^= 168; // pseudo decrypt

BinaryFormatter bfx = new BinaryFormatter();
MemoryStream msx = new MemoryStream();            
msx.Write(bytes, 0, bytes.Length);
msx.Seek(0, 0);
string sx = (string)bfx.Deserialize(msx);

MessageBox.Show("Still intact :" + sx);

MessageBox.Show("Deserialize string Length(still intact): " 
    + sx.Length.ToString());

BinaryFormatter bfy = new BinaryFormatter();
MemoryStream msy = new MemoryStream();
bfy.Serialize(msy, sx);
msy.Seek(0, 0);
byte[] bytesy = msy.ToArray();

MessageBox.Show("Deserialize bytes Length(still intact): " 
   + bytesy.Length.ToString());


System.Text.Encoding.UTF8.GetBytes("TEST String");

With the advent of Span<T> (introduced alongside C# 7.2), the canonical technique to capture the underlying memory representation of a string into a managed byte array is:

// using System.Runtime.InteropServices;
byte[] bytes = MemoryMarshal.AsBytes("rubbish_\u9999_string".AsSpan()).ToArray();

Converting it back should be a non-starter because that means you are in fact interpreting the data somehow, but for the sake of completeness:

string s;
unsafe
{
    fixed (char* f = &MemoryMarshal.GetReference(MemoryMarshal.Cast<byte, char>(bytes.AsSpan())))
        s = new string(f, 0, bytes.Length / sizeof(char));
}

The names MemoryMarshal.Cast and MemoryMarshal.GetReference (the released successors of the preview APIs NonPortableCast and DangerousGetPinnableReference) should further the argument that you probably shouldn't be doing this.

Note that working with Span<T> requires installing the System.Memory NuGet package.

Regardless, the actual original question and follow-up comments imply that the underlying memory is not being "interpreted" (which I assume means it is not modified or read beyond the need to write it as-is), indicating that some implementation of the Stream class should be used instead of reasoning about the data as strings at all.
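A minimal sketch of that idea ("payload.bin" is a hypothetical source): the bytes flow from one Stream to another without ever being decoded into a string:

// using System.IO;
using (var input = File.OpenRead("payload.bin"))
using (var output = new MemoryStream())
{
    input.CopyTo(output);            // raw bytes, never interpreted as text
    byte[] bytes = output.ToArray();
}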

The key issue is that a character in a .NET string takes 16 bits (a UTF-16 code unit; a full glyph may take two code units, 32 bits), but a byte only has 8 bits to spare. A one-to-one mapping doesn't exist unless you restrict yourself to strings that only contain ASCII characters. System.Text.Encoding has lots of ways to map a string to byte[]; you need to pick one that avoids loss of information and that is easy for your client to use when she needs to map the byte[] back to a string.

UTF-8 is a popular encoding; it is compact and not lossy.

Two ways:

// using System.Collections.Generic;
public static byte[] StrToByteArray(this string s)
{
    List<byte> value = new List<byte>();
    foreach (char c in s.ToCharArray())
        value.Add((byte)c); // note: truncates characters above U+00FF
    return value.ToArray();
}


// Interprets the string as hex digit pairs, e.g. "0A1B" -> { 0x0A, 0x1B }
public static byte[] StrToByteArray(this string s)
{
    s = s.Replace(" ", string.Empty);
    byte[] buffer = new byte[s.Length / 2];
    for (int i = 0; i < s.Length; i += 2)
        buffer[i / 2] = Convert.ToByte(s.Substring(i, 2), 16);
    return buffer;
}

I tend to use the bottom one more often than the top, though I haven't benchmarked them for speed.


It's bad, for example, when the string comes out of SQL Server, where it was built from a byte array storing, say, a password hash. If we drop anything from it, it will store an invalid hash, and if we want to store it in XML, we want to leave it intact (because the XML writer throws an exception on any unpaired surrogate it finds).

So I use Base64 encoding of byte arrays in such cases, but hey, on the Internet there is only one solution to this in C#, and it has a bug in it and works only one way, so I've fixed the bug and written the procedure back. Here you are, future googlers:

public static byte[] StringToBytes(string str)
{
    byte[] data = new byte[str.Length * 2];
    for (int i = 0; i < str.Length; ++i)
    {
        char ch = str[i];
        data[i * 2] = (byte)(ch & 0xFF);
        data[i * 2 + 1] = (byte)((ch & 0xFF00) >> 8);
    }

    return data;
}

public static string StringFromBytes(byte[] arr)
{
    char[] ch = new char[arr.Length / 2];
    for (int i = 0; i < ch.Length; ++i)
        ch[i] = (char)((int)arr[i * 2] + (((int)arr[i * 2 + 1]) << 8));
    return new String(ch);
}
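Usage: the pair round-trips any string, including lone surrogates, and the resulting byte array can then travel as Base64 per the point above:

string original = "a\ud800b";                              // deliberately malformed
string restored = StringFromBytes(StringToBytes(original));
Console.WriteLine(restored == original);                   // True
string base64 = Convert.ToBase64String(StringToBytes(original));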

Simple code with LINQ:

string s = "abc";
byte[] b = s.Select(e => (byte)e).ToArray();

EDIT: as commented below, it is not a good way, because it silently truncates any character above U+00FF.

Note that the seemingly equivalent Cast-based version

string s = "abc";
byte[] b = s.Cast<byte>().ToArray();

looks more concise but is worse still: it compiles, yet throws an InvalidCastException at runtime, because a boxed char cannot be unboxed as a byte.

This is a popular question. It is important to understand what the question's author is asking, and that it is different from what is likely the most common need. To discourage misuse of the code where it is not needed, I've answered the later question first.


Every string has a character set and encoding. When you convert a System.String object to an array of System.Byte, you still have a character set and encoding. For most usages, you know which character set and encoding you need, and .NET makes it simple to "copy with conversion". Just choose the appropriate Encoding class.

// using System.Text;
Encoding.UTF8.GetBytes(".NET String to byte array")

The conversion must handle cases where the target character set or encoding doesn't support a character that appears in the source. You have a few choices: exception, substitution, or skipping. The default policy is to substitute a '?'.

// using System.Text;
var text = Encoding.ASCII.GetString(Encoding.ASCII.GetBytes("You win €100")); 
                                                      // -> "You win ?100"
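If you prefer the exception policy instead, it can be requested explicitly (a small sketch using the built-in fallback classes):

// using System.Text;
var strictAscii = Encoding.GetEncoding("us-ascii",
    new EncoderExceptionFallback(), new DecoderExceptionFallback());
strictAscii.GetBytes("You win €100"); // throws EncoderFallbackException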


Note: for System.String, the source character set is Unicode.

The only confusing thing is that .NET uses the name of a character set for the name of one particular encoding of that character set: Encoding.Unicode actually denotes UTF-16 and should really have been named Encoding.Utf16.

That covers most usages. If that's what you need, stop reading here. If you don't understand what an encoding is, see the fun joelonsoftware.com/articles/Unicode.html article.





Character and string processing in C# uses Unicode encoding. The char type represents a UTF-16 code unit, and the string type represents a sequence of UTF-16 code units.


Encoding.Unicode.GetBytes(".NET String to byte array")

But to avoid any mention of encoding, you'd have to do it a different way. If an intermediate data type is acceptable, there is a conceptual shortcut:

".NET String to byte array".ToCharArray()

Mehrdad's answer shows how to convert this Char array to a Byte array using BlockCopy. However, this copies the string twice, and it, too, explicitly uses encoding-specific code: the datatype System.Char.

The only way to get to the actual bytes the String is stored in is to use a pointer. The fixed statement allows taking the address of values. From the C# specification:

[For] an expression of type string, ... the initializer computes the address of the first character in the string.

To do so, the compiler writes code that skips over the other parts of the string object, using RuntimeHelpers.OffsetToStringData. So, to get the raw bytes, just create a pointer to the string and copy the number of bytes needed.

// using System.Runtime.InteropServices
unsafe byte[] GetRawBytes(String s)
{
    if (s == null) return null;
    var codeunitCount = s.Length;
    /* We know that String is a sequence of UTF-16 code units
       and each such code unit is 2 bytes */
    var byteCount = codeunitCount * 2;
    var bytes = new byte[byteCount];
    fixed (void* pRaw = s)
    {
        Marshal.Copy((IntPtr)pRaw, bytes, 0, byteCount);
    }
    return bytes;
}

As @CodesInChaos pointed out, the result depends on the endianness of the machine. But the question's author is not concerned with that.
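The endianness difference is easy to see by comparing the two UTF-16 encodings (a small illustration):

byte[] le = System.Text.Encoding.Unicode.GetBytes("\u03a0");          // { 0xA0, 0x03 }
byte[] be = System.Text.Encoding.BigEndianUnicode.GetBytes("\u03a0"); // { 0x03, 0xA0 }
// GetRawBytes returns whichever order the machine uses natively.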


public static byte[] StrToByteArray(string str)
{
    System.Text.UTF8Encoding encoding = new System.Text.UTF8Encoding();
    return encoding.GetBytes(str);
}


