Search Results

Search found 5202 results on 209 pages for 'char'.

Page 5 of 209

  • Pinvoke- to call a function with pointer to pointer to pointer parameter

    - by jambodev
    I'm a complete newbie to P/Invoke. I have a function in C with this signature: int addPos(int init_array_size, int *cnt, int *array_size, PosT ***posArray, PosT ***hPtr, char *id, char *record_id, int num, char *code, char *type, char *name, char *method, char *cont1, char *cont2, char *cont_type, char *date1, char *date_day, char *date2, char *dsp, char *curr, char *contra_acc, char *np, char *ten, char *dsp2, char *covered, char *cont_subtype, char *Xcode, double strike, int version, double t_price, double long, double short, double scale, double exrcised_price, char *infoMsg); and here is what PosT looks like: typedef union pu { struct dpos d; struct epo e; struct bpos b; struct spos c; } PosT; My questions are: 1. Do I need to define a class in C# representing PosT? 2. How do I pass the PosT ***posArray parameter across from C# to C? 3. How do I specify marshaling for it all? I do appreciate your help.

    Read the article

  • How to convert int to char in JSP expression language?

    - by james.bunt
    I need to display incremented single characters to denote footnotes in a table of data in a JSP. In Java I would normally have a char variable and just increment it, or convert an int to a char by casting it (e.g. (char)(i + 97) to convert a 0-based index to a-z). I can't figure out how to do this in expression language short of writing my own JSTL function. Does anyone know how to convert an int to char in EL? Or how to increment a char variable in EL? Or possibly even a better technique to do what I'm trying to do in JSP/EL? Example of what I need to be able to produce: a mydata b myotherdata ... a first footnote b second footnote

    Read the article

  • C: Why does gcc allow char array initialization with string literal larger than array?

    - by Ashwin
    int main() { char a[7] = "Network"; return 0; } A string literal in C is terminated internally with a nul character. So, the above code should give a compilation error since the actual length of the string literal Network is 8 and it cannot fit in a char[7] array. However, gcc (even with -Wall) on Ubuntu compiles this code without any error or warning. Why does gcc allow this and not flag it as compilation error? gcc only gives a warning (still no error!) when the char array size is smaller than the string literal. For example, it warns on: char a[6] = "Network"; [Related] Visual C++ 2012 gives a compilation error for char a[7]: 1>d:\main.cpp(3): error C2117: 'a' : array bounds overflow 1> d:\main.cpp(3) : see declaration of 'a'
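
    A note on the language difference, plus a sketch of the usual fix. In C the exact-fit initialization is valid: the literal's terminating '\0' is simply dropped when there is no room for it, so gcc has nothing it must diagnose, while C++ makes the same initialization ill-formed, which is why Visual C++ (compiling main.cpp as C++) rejects it. The snippet below is not from the question; it shows two ways to size the array so the terminator survives, and it compiles as either C or C++:

        #include <stdio.h>

        int main(void)
        {
            char a[] = "Network";    /* let the compiler size it: sizeof a == 8 */
            char b[8] = "Network";   /* explicit size with room for the '\0' */
            printf("%zu %zu\n", sizeof a, sizeof b);
            return 0;
        }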

    Read the article

  • How to get rid of `deprecated conversion from string constant to ‘char*’` warnings in GCC?

    - by Josh Matthews
    So I'm working on an exceedingly large codebase, and recently upgraded to gcc 4.3, which now triggers this warning: warning: deprecated conversion from string constant to ‘char*’ Obviously, the correct way to fix this is to find every declaration like char *s = "constant string"; or function call like void foo(char *s); foo("constant string"); and make them const char pointers. However, that would mean touching 564 files, minimum, which is not a task I wish to perform at this point in time. The problem right now is that I'm running with -werror, so I need some way to stifle these warnings. How can I do that?
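
    The warning comes from -Wwrite-strings, which g++ enables by default for C++, so -Wno-write-strings will silence it project-wide until the 564 files can be fixed. Where the code can be touched, the const-correct form costs nothing; a small illustrative sketch (foo and legacy are stand-ins, not functions from that codebase):

        #include <cstdio>

        // const-correct form: a string literal binds to const char* with no diagnostic
        void foo(const char* s) { std::printf("%s\n", s); }

        // legacy signature you cannot change yet
        void legacy(char* s) { std::printf("%s\n", s); }

        int main()
        {
            foo("constant string");                        // fine
            legacy(const_cast<char*>("constant string"));  // compiles, but legacy() must never write through s
            return 0;
        }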

    Read the article

  • C# Reading and Writing a Char[] to and from a Byte[]

    - by Simon G
    Hi, I have a byte array of around 10,000 bytes which is basically a blob from delphi that contains char, string, double and arrays of various types. This need to be read in and updated via C#. I've created a very basic reader that gets the byte array from the db and converts the bytes to the relevant object type when accessing the property which works fine. My problem is when I try to write to a specific char[] item, it doesn't seem to update the byte array. I've created the following extensions for reading and writing: public static class CharExtension { public static byte ToByte( this char c ) { return Convert.ToByte( c ); } public static byte ToByte( this char c, int position, byte[] blob ) { byte b = c.ToByte(); blob[position] = b; return b; } } public static class CharArrayExtension { public static byte[] ToByteArray( this char[] c ) { byte[] b = new byte[c.Length]; for ( int i = 1; i < c.Length; i++ ) { b[i] = c[i].ToByte(); } return b; } public static byte[] ToByteArray( this char[] c, int positon, int length, byte[] blob ) { byte[] b = c.ToByteArray(); Array.Copy( b, 0, blob, positon, length ); return b; } } public static class ByteExtension { public static char ToChar( this byte[] b, int position ) { return Convert.ToChar( b[position] ); } } public static class ByteArrayExtension { public static char[] ToCharArray( this byte[] b, int position, int length ) { char[] c = new char[length]; for ( int i = 0; i < length; i++ ) { c[i] = b.ToChar( position ); position += 1; } return c; } } to read and write chars and char arrays my code looks like: Byte[] _Blob; // set from a db field public char ubin { get { return _tariffBlob.ToChar( 14 ); } set { value.ToByte( 14, _Blob ); } } public char[] usercaplas { get { return _tariffBlob.ToCharArray( 2035, 10 ); } set { value.ToByteArray( 2035, 10, _Blob ); } } So to write to the objects I can do: ubin = 'C'; // this will update the byte[] usercaplas = new char[10] { 'A', 'B', etc. }; // this will update the byte[] usercaplas[3] = 'C'; // this does not update the byte[] I know the reason is that the setter property is not being called but I want to know is there a way around this using code similar to what I already have? I know a possible solution is to use a private variable called _usercaplas that I set and update as needed however as the byte array is nearly 10,000 bytes in length the class is already long and I would like a simpler approach as to reduce the overall code length and complexity. Thank

    Read the article

  • friending istream operator with class

    - by user1388172
    hello i'm trying to overload my operator >> to my class but i ecnouter an error in eclipse. code: friend istream& operator>>(const istream& is, const RAngle& ra){ return is >> ra.x >> ra.y; } code2: friend istream& operator>>(const istream& is, const RAngle& ra) { is >> ra.x; is >> ra.y; return is } Both crash and i don't know why, please help. EDIT: ra.x & ra.y are both 2 private ints of my class; Full error: error: ..\/rightangle.h: In function 'std::istream& operator>>(std::istream&, const RAngle&)': ..\/rightangle.h:65:12: error: ambiguous overload for 'operator>>' in 'is >> ra.RAngle::x' ..\/rightangle.h:65:12: note: candidates are: c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:122:7: note: std::basic_istream<_CharT, _Traits>::__istream_type& std::basic_istream<_CharT, _Traits>::operator>>(std::basic_istream<_CharT, _Traits>::__istream_type& (*)(std::basic_istream<_CharT, _Traits>::__istream_type&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__istream_type = std::basic_istream<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:122:7: note: no known conversion for argument 1 from 'const int' to 'std::basic_istream<char>::__istream_type& (*)(std::basic_istream<char>::__istream_type&) {aka std::basic_istream<char>& (*)(std::basic_istream<char>&)}' c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:126:7: note: std::basic_istream<_CharT, _Traits>::__istream_type& std::basic_istream<_CharT, _Traits>::operator>>(std::basic_istream<_CharT, _Traits>::__ios_type& (*)(std::basic_istream<_CharT, _Traits>::__ios_type&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__istream_type = std::basic_istream<char>, std::basic_istream<_CharT, _Traits>::__ios_type = std::basic_ios<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:126:7: note: no known conversion for argument 1 from 'const int' to 'std::basic_istream<char>::__ios_type& (*)(std::basic_istream<char>::__ios_type&) {aka std::basic_ios<char>& (*)(std::basic_ios<char>&)}' c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:133:7: note: std::basic_istream<_CharT, _Traits>::__istream_type& std::basic_istream<_CharT, _Traits>::operator>>(std::ios_base& (*)(std::ios_base&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__istream_type = std::basic_istream<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:133:7: note: no known conversion for argument 1 from 'const int' to 'std::ios_base& (*)(std::ios_base&)' c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:241:7: note: std::basic_istream<_CharT, _Traits>& std::basic_istream<_CharT, _Traits>::operator>>(std::basic_istream<_CharT, _Traits>::__streambuf_type*) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__streambuf_type = std::basic_streambuf<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:241:7: note: no known conversion for argument 1 from 'const int' to 'std::basic_istream<char>::__streambuf_type* {aka std::basic_streambuf<char>*}' ..\/rightangle.h:66:12: error: ambiguous overload for 'operator>>' in 'is >> ra.RAngle::y' ..\/rightangle.h:66:12: note: candidates are: c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:122:7: note: std::basic_istream<_CharT, _Traits>::__istream_type& std::basic_istream<_CharT, _Traits>::operator>>(std::basic_istream<_CharT, 
_Traits>::__istream_type& (*)(std::basic_istream<_CharT, _Traits>::__istream_type&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__istream_type = std::basic_istream<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:122:7: note: no known conversion for argument 1 from 'const int' to 'std::basic_istream<char>::__istream_type& (*)(std::basic_istream<char>::__istream_type&) {aka std::basic_istream<char>& (*)(std::basic_istream<char>&)}' c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:126:7: note: std::basic_istream<_CharT, _Traits>::__istream_type& std::basic_istream<_CharT, _Traits>::operator>>(std::basic_istream<_CharT, _Traits>::__ios_type& (*)(std::basic_istream<_CharT, _Traits>::__ios_type&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__istream_type = std::basic_istream<char>, std::basic_istream<_CharT, _Traits>::__ios_type = std::basic_ios<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:126:7: note: no known conversion for argument 1 from 'const int' to 'std::basic_istream<char>::__ios_type& (*)(std::basic_istream<char>::__ios_type&) {aka std::basic_ios<char>& (*)(std::basic_ios<char>&)}' c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:133:7: note: std::basic_istream<_CharT, _Traits>::__istream_type& std::basic_istream<_CharT, _Traits>::operator>>(std::ios_base& (*)(std::ios_base&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__istream_type = std::basic_istream<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:133:7: note: no known conversion for argument 1 from 'const int' to 'std::ios_base& (*)(std::ios_base&)' c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:241:7: note: std::basic_istream<_CharT, _Traits>& std::basic_istream<_CharT, _Traits>::operator>>(std::basic_istream<_CharT, _Traits>::__streambuf_type*) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_istream<_CharT, _Traits>::__streambuf_type = std::basic_streambuf<char>] <near match> c:\mingw\bin\../lib/gcc/mingw32/4.6.1/include/c++/istream:241:7: note: no known conversion for argument 1 from 'const int' to 'std::basic_istream<char>::__streambuf_type* {aka std::basic_streambuf<char>*}''
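
    The const qualifiers are the real problem: operator>> has to modify both the stream and the object, so neither parameter can be const (the second version is also missing a semicolon after "return is"). A minimal corrected sketch with a self-contained RAngle so it compiles on its own:

        #include <iostream>

        class RAngle {
        public:
            // stream and object are both non-const: extraction modifies both
            friend std::istream& operator>>(std::istream& is, RAngle& ra)
            {
                return is >> ra.x >> ra.y;
            }
        private:
            int x = 0;
            int y = 0;
        };

        int main()
        {
            RAngle ra;
            std::cin >> ra;   // reads two ints into ra.x and ra.y
            return 0;
        }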

    Read the article

  • Converting String^ and Collection of String^ to const char*

    - by Jim Jones
    Using VS2008 Managed C++ to wrap a dll. The native method takes a series of single const char* values and a collection of char* values. Going to make an example function: Function1(char * value1, TF_StringList& catList); TF_StringList is a dll class with 3 insert methods, the one I want to use is: TF_StringList::insert(const char* str); So I set up a wrapper method of: WrapperClass::callFunction(String^ mvalue1, ArrayList mcatList); mvalue1 is converted to const char* using: const char* value1 = (char*)(Marshal::StringToHGlobalAnsi(mvalue1)).ToPointer(); However, when a get to the collection of strings, I iterate over it getting each string using the index: String^ mstr = mcatList[i]; Have tried every way of converting String^ to const char* and in every case the TF_StringList::insert(const char* str) method throws a C2663 error which has to do with the const-ness of the value. What is the problem?

    Read the article

  • What's the meaning of 'char (*p)[5];'?

    - by jpmelos
    Hi people. I'm trying to grasp the differences between these three declarations: char p[5]; char *p[5]; char (*p)[5]; I'm trying to find this out by doing some tests, because every guide to reading declarations and stuff like that has not helped me so far. I wrote this little program and it's not working (I've tried other kinds of use of the third declaration and I've run out of options): #include <stdio.h> #include <string.h> #include <stdlib.h> int main(void) { char p1[5]; char *p2[5]; char (*p3)[5]; strcpy(p1, "dead"); p2[0] = (char *) malloc(5 * sizeof(char)); strcpy(p2[0], "beef"); p3[0] = (char *) malloc(5 * sizeof(char)); strcpy(p3[0], "char"); printf("p1 = %s\np2[0] = %s\np3[0] = %s\n", p1, p2[0], p3[0]); return 0; } The first and second work alright, and I've understood what they do. What is the meaning of the third declaration and the correct way to use it? Thank you!
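
    char (*p3)[5] is a pointer to an array of five chars, so it must be pointed at such an array (or at memory allocated for one) before it is dereferenced; *p3 (equivalently p3[0]) is then the whole five-char array, not a char*. A small sketch of both ways to use it; the names are chosen here for illustration:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            char buf[5];
            char (*p3)[5] = &buf;              /* point it at an existing 5-char array */
            strcpy(*p3, "char");               /* "char" plus '\0' exactly fills it */

            char (*q)[5] = (char (*)[5])malloc(3 * sizeof *q);   /* or a block of three 5-char rows */
            strcpy(q[0], "one");
            strcpy(q[1], "two");
            printf("%s %s %s\n", *p3, q[0], q[1]);
            free(q);
            return 0;
        }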

    Read the article

  • Improve Performance of char.IsWhiteSpace for ASCII inputs in .NET 3.5

    - by Tanzim Saqib
    IsNullOrWhiteSpace is a new method introduced in the string class in .NET 4.0. While it is a very useful method for string-based processing, I attempted to implement it in .NET 3.5 using char.IsWhiteSpace(). I found a significant performance penalty using that method, which I later replaced with my own version. The following code takes about 20.6074219 seconds on my machine, whereas my implementation of char.IsWhiteSpace takes about a quarter less time, only 15.8271485 seconds. In many scenarios ex. string...(read more)

    Read the article

  • cin.getline() equivalent when getting a char from a function.

    - by Aaron
    From what I understand, cin.getline takes the first char (which I think is a pointer) and then the length. I have used it when reading a char array with cin. I have a function that is returning a pointer to the first char in an array. Is there an equivalent to get the rest of the array into a char array so that I can use the entire array? I explained below what I am trying to do. The function works fine, but if it would help I could post the function. cmd_str[0]=infile();// get the pointer from a function cout<<"pp1>"; cout<< "test1"<<endl; // cin.getline(cmd_str,500);something like this with the array from the function cout<<cmd_str<<endl; this would print out the entire array cout<<"test2"<<endl; length=0; length= shell(cmd_str);// so I could pass it to this function
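
    If infile() returns a pointer to a NUL-terminated buffer that stays valid, nothing extra has to be fetched: the single pointer already reaches the whole array, and << (or strlen) walks it up to the terminator, so no getline step is needed. A short sketch under that assumption; infile() and shell() here are stand-ins for the poster's functions:

        #include <cstring>
        #include <iostream>

        // stand-ins, assumed to work with NUL-terminated buffers
        const char* infile() { static char buf[] = "ls -l /tmp"; return buf; }
        int shell(const char* cmd) { return (int)std::strlen(cmd); }

        int main()
        {
            const char* cmd_str = infile();      // the pointer reaches the whole array
            std::cout << "pp1>" << "test1" << std::endl;
            std::cout << cmd_str << std::endl;   // prints everything up to the '\0'
            int length = shell(cmd_str);         // pass the same pointer straight on
            std::cout << "length=" << length << std::endl;
            return 0;
        }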

    Read the article

  • how to pass vector of string to foo(char const *const *const)?

    - by user347208
    Hi, This is my first post so please be nice. I searched in this forum and googled but I still can not find the answer. This problem has bothered me for more than a day, so please give me some help. Thank you. I need to pass a vector of string to a library function foo(char const *const *const). I can not pass the &Vec[0] since it's a pointer to a string. Therefore, I have an array and pass the c_str() to that array. The following is my code (aNames is the vector of string): const char* aR[aNames.size()]; std::transform(aNames.begin(), aNames.end(), aR, boost::bind(&std::string::c_str, _1)); foo(aR); However, it seems it causes some undefined behavior: If I run the above code, then the function foo throw some warnings about illegal characters ('èI' blablabla) in aR. If I print aR before function foo like this: std::copy(aR, aR+rowNames.size(), std::ostream_iterator<const char*>(std::cout, "\n")); foo(aR); Then, everything is fine. My questions are: Does the conversion causes undefined behavior? If so, why? What is the correct way to pass vector of string to foo(char const *const *const)? Thank you very much for your help!
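
    The usual pattern is to build a parallel std::vector<const char*> from the strings' c_str() pointers and hand foo() its data; the strings must stay alive and unmodified while foo() runs. Note that the original declaration const char* aR[aNames.size()] is a variable-length array, which is itself only a compiler extension in C++. Whether foo() also wants an element count or a trailing NULL depends on that library and is assumed here:

        #include <string>
        #include <vector>

        // stand-in for the library function from the question
        void foo(char const* const* const names) { (void)names; }

        int main()
        {
            std::vector<std::string> aNames = {"alpha", "beta", "gamma"};

            std::vector<const char*> aR;
            aR.reserve(aNames.size() + 1);
            for (const std::string& s : aNames)
                aR.push_back(s.c_str());   // valid while aNames is alive and unchanged
            aR.push_back(nullptr);         // assumption: foo() expects a NULL-terminated list

            foo(aR.data());
            return 0;
        }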

    Read the article

  • C# Reading and Writing a Char[] to and from a Byte[] - Updated with Solution

    - by Simon G
    Hi, I have a byte array of around 10,000 bytes which is basically a blob from delphi that contains char, string, double and arrays of various types. This need to be read in and updated via C#. I've created a very basic reader that gets the byte array from the db and converts the bytes to the relevant object type when accessing the property which works fine. My problem is when I try to write to a specific char[] item, it doesn't seem to update the byte array. I've created the following extensions for reading and writing: public static class CharExtension { public static byte ToByte( this char c ) { return Convert.ToByte( c ); } public static byte ToByte( this char c, int position, byte[] blob ) { byte b = c.ToByte(); blob[position] = b; return b; } } public static class CharArrayExtension { public static byte[] ToByteArray( this char[] c ) { byte[] b = new byte[c.Length]; for ( int i = 1; i < c.Length; i++ ) { b[i] = c[i].ToByte(); } return b; } public static byte[] ToByteArray( this char[] c, int positon, int length, byte[] blob ) { byte[] b = c.ToByteArray(); Array.Copy( b, 0, blob, positon, length ); return b; } } public static class ByteExtension { public static char ToChar( this byte[] b, int position ) { return Convert.ToChar( b[position] ); } } public static class ByteArrayExtension { public static char[] ToCharArray( this byte[] b, int position, int length ) { char[] c = new char[length]; for ( int i = 0; i < length; i++ ) { c[i] = b.ToChar( position ); position += 1; } return c; } } to read and write chars and char arrays my code looks like: Byte[] _Blob; // set from a db field public char ubin { get { return _tariffBlob.ToChar( 14 ); } set { value.ToByte( 14, _Blob ); } } public char[] usercaplas { get { return _tariffBlob.ToCharArray( 2035, 10 ); } set { value.ToByteArray( 2035, 10, _Blob ); } } So to write to the objects I can do: ubin = 'C'; // this will update the byte[] usercaplas = new char[10] { 'A', 'B', etc. }; // this will update the byte[] usercaplas[3] = 'C'; // this does not update the byte[] I know the reason is that the setter property is not being called but I want to know is there a way around this using code similar to what I already have? I know a possible solution is to use a private variable called _usercaplas that I set and update as needed however as the byte array is nearly 10,000 bytes in length the class is already long and I would like a simpler approach as to reduce the overall code length and complexity. Thank Solution Here's my solution should anyone want it. If you have a better way of doing then let me know please. 
First I created a new class for the array: public class CharArrayList : ArrayList { char[] arr; private byte[] blob; private int length = 0; private int position = 0; public CharArrayList( byte[] blob, int position, int length ) { this.blob = blob; this.length = length; this.position = position; PopulateInternalArray(); SetArray(); } private void PopulateInternalArray() { arr = blob.ToCharArray( position, length ); } private void SetArray() { foreach ( char c in arr ) { this.Add( c ); } } private void UpdateInternalArray() { this.Clear(); SetArray(); } public char this[int i] { get { return arr[i]; } set { arr[i] = value; UpdateInternalArray(); } } } Then I created a couple of extension methods to help with converting to a byte[] public static byte[] ToByteArray( this CharArrayList c ) { byte[] b = new byte[c.Count]; for ( int i = 0; i < c.Count; i++ ) { b[i] = Convert.ToChar( c[i] ).ToByte(); } return b; } public static byte[] ToByteArray( this CharArrayList c, byte[] blob, int position, int length ) { byte[] b = c.ToByteArray(); Array.Copy( b, 0, blob, position, length ); return b; } So to read and write to the object: private CharArrayList _usercaplass; public CharArrayList usercaplas { get { if ( _usercaplass == null ) _usercaplass = new CharArrayList( _tariffBlob, 2035, 100 ); return _usercaplass; } set { _usercaplass = value; _usercaplass.ToByteArray( _tariffBlob, 2035, 100 ); } } As mentioned before its not an ideal solutions as I have to have private variables and extra code in the setter but I couldnt see a way around it.

    Read the article

  • Why is passing a string literal into a char* arguament only sometimes a compiler error?

    - by Brian Postow
    I'm working on a C and C++ program. We used to be compiling without the make-strings-writable option. But that was getting a bunch of warnings, so I turned it off. Then I got a whole bunch of errors of the form "Cannot convert const char* to char* in argument 3 of function foo". So, I went through and made a whole lot of changes to fix those. However, today, the program CRASHED because the literal "" was getting passed into a function that was expecting a char* and was setting the 0th character to 0. It wasn't doing anything bad, just trying to edit a constant, and crashing. My question is: why wasn't that a compiler error? In case it matters, this was on a Mac compiled with gcc-4.0.
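
    The short answer: in C a string literal has type char[N], not const char[N], so passing it where a char* is expected needs no diagnostic at all, and in C++ before C++11 the conversion to char* was deprecated but still allowed; writing through that pointer is undefined behaviour either way, and it crashes because the literal normally lives in read-only storage. A tiny sketch of the distinction (zero_first is a hypothetical stand-in for the function that crashed):

        #include <cstring>

        void zero_first(char* s) { s[0] = '\0'; }   // hypothetical callee that writes to its argument

        int main()
        {
            // zero_first("");       // compiles under the deprecated conversion, but is UB: the literal is read-only
            char buf[] = "";         // a writable copy of the literal
            zero_first(buf);         // fine: buf is our own array
            return 0;
        }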

    Read the article

  • "const char *" is incompatible with parameter of type "LPCWSTR" error

    - by N0xus
    I'm trying to incorporate some code from Programming an RTS Game With Direct3D into my game. Before anyone says it, I know the book is kinda old, but it's the particle effects system he creates that I'm trying to use. With his shader class, he intialise it thusly: void SHADER::Init(IDirect3DDevice9 *Dev, const char fName[], int typ) { m_pDevice = Dev; m_type = typ; if(m_pDevice == NULL)return; // Assemble and set the pixel or vertex shader HRESULT hRes; LPD3DXBUFFER Code = NULL; LPD3DXBUFFER ErrorMsgs = NULL; if(m_type == PIXEL_SHADER) hRes = D3DXCompileShaderFromFile(fName, NULL, NULL, "Main", "ps_2_0", D3DXSHADER_DEBUG, &Code, &ErrorMsgs, &m_pConstantTable); else hRes = D3DXCompileShaderFromFile(fName, NULL, NULL, "Main", "vs_2_0", D3DXSHADER_DEBUG, &Code, &ErrorMsgs, &m_pConstantTable); } How ever, this generates the following error: Error 1 error C2664: 'D3DXCompileShaderFromFileW' : cannot convert parameter 1 from 'const char []' to 'LPCWSTR' The compiler states the issue is with fName in the D3DXCompileShaderFromFile line. I know this has something to do with the character set, and my program was already running with a Unicode Character set on the go. I read that to solve the above problem, I need to switch to a multi-byte character set. But, if I do that, I get other errors in my code, like so: Error 2 error C2664: 'D3DXCreateEffectFromFileA' : cannot convert parameter 2 from 'const wchar_t *' to 'LPCSTR' With it being accredited to the following line of code: if(FAILED(D3DXCreateEffectFromFile(m_pD3DDevice9,effectFileName.c_str(),NULL,NULL,0,NULL,&m_pCurrentEffect,&pErrorBuffer))) This if is nested within another if statement checking my effectmap list. Though it is the FAILED word with the red line. Like wise I get the another error with the following line of code: wstring effectFileName = TEXT("Sky.fx"); With the error message being: Error 1 error C2440: 'initializing' : cannot convert from 'const char [7]' to 'std::basic_string<_Elem,_Traits,_Ax' If I change it back to a Uni code character set, I get the original (fewer) errors. Leaving as a multi-byte, I get more errors. Does anyone know of a way I can fix this issue?
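
    An alternative to flipping the whole project to multi-byte is to keep the Unicode character set and widen the narrow fName at the call site. The helper below is a hedged sketch built on the Win32 MultiByteToWideChar API (assuming the input is ANSI); the D3DX calls themselves are left exactly as in the book, you would just pass ToWide(fName).c_str() where an LPCWSTR is expected:

        #include <windows.h>
        #include <string>

        // Convert a narrow (ANSI) string to a wide one so it can be passed as LPCWSTR.
        std::wstring ToWide(const char* narrow)
        {
            if (!narrow) return std::wstring();
            int len = MultiByteToWideChar(CP_ACP, 0, narrow, -1, nullptr, 0);  // count includes the L'\0'
            if (len <= 0) return std::wstring();
            std::wstring wide(static_cast<size_t>(len), L'\0');
            MultiByteToWideChar(CP_ACP, 0, narrow, -1, &wide[0], len);
            wide.resize(static_cast<size_t>(len) - 1);                         // drop the embedded terminator
            return wide;
        }

        int main()
        {
            std::wstring w = ToWide("Sky.fx");   // pass w.c_str() wherever an LPCWSTR is required
            return (int)w.size();
        }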

    Read the article

  • Why is the Objective-C Boolean data type defined as a signed char?

    - by EddieCatflap
    Something that has piqued my interest is Objective-C's BOOL type definition. Why is it defined as a signed char (which could cause unexpected behaviour if a value greater than 1 byte in length is assigned to it) rather than as an int, as C does (much less margin for error: a zero value is false, a non-zero value is true)? The only reason I can think of is the Objective-C designers micro-optimising storage because the char will use less memory than the int. Please can someone enlighten me?
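
    The pitfall hinted at above is easy to reproduce in plain C or C++ with a signed char standing in for BOOL: any intended "true" value whose low byte happens to be zero collapses to false. A tiny illustration (not Objective-C, just the underlying char behaviour; the out-of-range conversion is implementation-defined, but on a typical two's-complement platform the low byte is kept):

        #include <stdio.h>

        int main(void)
        {
            signed char flag = (signed char)256;   /* low byte is 0, so the intended "true" becomes 0 */
            if (flag)
                printf("true\n");
            else
                printf("false, even though 256 is non-zero\n");
            return 0;
        }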

    Read the article

  • Initializing a char array through passed pointer segfaults

    - by Bitgarden
    I.e., why does the following work: char* char_array(size_t size){ return new char[size]; } int main(){ const char* foo = "foo"; size_t len = strlen(foo); char* bar=char_array(len); memset(bar, 0, len+1); } But the following segfaults: void char_array(char* out, size_t size){ out= new char[size]; } int main(){ const char* foo = "foo"; size_t len = strlen(foo); char* bar; char_array(bar, len); memset(bar, 0, len+1); }
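
    The second version assigns the new buffer to a local copy of the pointer, so bar in main() is never set and memset writes through an uninitialized pointer. Passing the pointer by address (char**) or, in C++, by reference makes the assignment visible to the caller; a minimal fixed sketch:

        #include <cstring>

        // take the pointer by reference so the caller's variable is the one that gets set
        void char_array(char*& out, std::size_t size)
        {
            out = new char[size];
        }

        int main()
        {
            const char* foo = "foo";
            std::size_t len = std::strlen(foo);
            char* bar = nullptr;
            char_array(bar, len + 1);      // allocate room for the terminator too
            std::memset(bar, 0, len + 1);
            delete[] bar;
            return 0;
        }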

    Read the article

  • Making a char function parameter const?

    - by Helper Method
    Consider this function declaration: int IndexOf(const char *, char); where char * is a string and char the character to find within the string (returns -1 if the char is not found, otherwise its position). Does it make sense to make the char also const? I always try to use const on pointer parameters but when something is called by value, I normally leave the const away. What are your thoughts?

    Read the article

  • to store char* from function return value

    - by samprat
    Hi folks, I am trying to implement a function which reads from Serial Port ( Linux) and retuns char*. The function works fine but how would I store return value from function. example of function is char *ReadToSerialPort() { char *bufptr; char buffer[256]; // Input buffer/ / //char *bufptr; // Current char in buffer // int nbytes; // Number of bytes read // bufptr = buffer; while ((nbytes = read(fd, bufptr, buffer+sizeof(buffer)-bufptr -1 )) > 0) { bufptr += nbytes; // if (bufptr[-1] == '\n' || bufptr[-1] == '\r') /*if ( bufptr[sizeof(buffer) -1] == '*' && bufptr[0] == '$' ) { break; }*/ } // while ends if ( nbytes ) return bufptr; else return 0; *bufptr = '\0'; } // end ReadAdrPort //In main int main( int argc , char *argv[]) { char *letter; if(strcpy(letter, ReadToSerialPort()) >0 ) { printf("Response is %s\n",letter); } }
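
    As written, the function returns a pointer into buffer, a local array that disappears when the function returns (and the '\0' is written after the return statement, so it never runs), while main() then strcpy()s into an uninitialized letter pointer. One common fix is to let the caller own the buffer and have the function just fill it; the sketch below assumes fd is a serial-port descriptor opened elsewhere, as in the original:

        #include <stdio.h>
        #include <unistd.h>

        /* Fill the caller's buffer from fd; returns the number of bytes stored. */
        ssize_t ReadFromSerialPort(int fd, char* out, size_t cap)
        {
            if (cap == 0)
                return 0;
            size_t used = 0;
            while (used < cap - 1) {
                ssize_t n = read(fd, out + used, cap - 1 - used);
                if (n <= 0)
                    break;                 /* EOF, error, or nothing more to read */
                used += (size_t)n;
            }
            out[used] = '\0';              /* always hand back a terminated string */
            return (ssize_t)used;
        }

        int main(void)
        {
            int fd = 0;                    /* assumption: replace with the descriptor opened elsewhere */
            char letter[256];
            if (ReadFromSerialPort(fd, letter, sizeof letter) > 0)
                printf("Response is %s\n", letter);
            return 0;
        }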

    Read the article

  • Error while linking libvorbisfile.dylib into Mac application

    - by computergeek6
    I'm working on a program that loads sounds from Ogg Vorbis files, but whatever I do, the XCode project just doesn't seem to want to link libvorbisfile.a into my program. I keep getting linking errors: "_ov_read", referenced from: GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o "_ov_clear", referenced from: GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o "_ov_info", referenced from: GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o "_ov_open", referenced from: GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o GSound::GSound(GWorld*, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)in GSound.o ld: symbol(s) not found

    Read the article

  • SQLite, python, unicode, and non-utf data

    - by Nathan Spears
    I started by trying to store strings in sqlite using python, and got the message: sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings. Ok, I switched to Unicode strings. Then I started getting the message: sqlite3.OperationalError: Could not decode to UTF-8 column 'tag_artist' with text 'Sigur Rós' when trying to retrieve data from the db. More research and I started encoding it in utf8, but then 'Sigur Rós' starts looking like 'Sigur Rós' note: My console was set to display in 'latin_1' as @John Machin pointed out. What gives? After reading this, describing exactly the same situation I'm in, it seems as if the advice is to ignore the other advice and use 8-bit bytestrings after all. I didn't know much about unicode and utf before I started this process. I've learned quite a bit in the last couple hours, but I'm still ignorant of whether there is a way to correctly convert 'ó' from latin-1 to utf-8 and not mangle it. If there isn't, why would sqlite 'highly recommend' I switch my application to unicode strings? I'm going to update this question with a summary and some example code of everything I've learned in the last 24 hours so that someone in my shoes can have an easy(er) guide. If the information I post is wrong or misleading in any way please tell me and I'll update, or one of you senior guys can update. Summary of answers Let me first state the goal as I understand it. The goal in processing various encodings, if you are trying to convert between them, is to understand what your source encoding is, then convert it to unicode using that source encoding, then convert it to your desired encoding. Unicode is a base and encodings are mappings of subsets of that base. utf_8 has room for every character in unicode, but because they aren't in the same place as, for instance, latin_1, a string encoded in utf_8 and sent to a latin_1 console will not look the way you expect. In python the process of getting to unicode and into another encoding looks like: str.decode('source_encoding').encode('desired_encoding') or if the str is already in unicode str.encode('desired_encoding') For sqlite I didn't actually want to encode it again, I wanted to decode it and leave it in unicode format. Here are four things you might need to be aware of as you try to work with unicode and encodings in python. The encoding of the string you want to work with, and the encoding you want to get it to. The system encoding. The console encoding. The encoding of the source file Elaboration: (1) When you read a string from a source, it must have some encoding, like latin_1 or utf_8. In my case, I'm getting strings from filenames, so unfortunately, I could be getting any kind of encoding. Windows XP uses UCS-2 (a Unicode system) as its native string type, which seems like cheating to me. Fortunately for me, the characters in most filenames are not going to be made up of more than one source encoding type, and I think all of mine were either completely latin_1, completely utf_8, or just plain ascii (which is a subset of both of those). So I just read them and decoded them as if they were still in latin_1 or utf_8. It's possible, though, that you could have latin_1 and utf_8 and whatever other characters mixed together in a filename on Windows. 
Sometimes those characters can show up as boxes, other times they just look mangled, and other times they look correct (accented characters and whatnot). Moving on. (2) Python has a default system encoding that gets set when python starts and can't be changed during runtime. See here for details. Dirty summary ... well here's the file I added: \# sitecustomize.py \# this file can be anywhere in your Python path, \# but it usually goes in ${pythondir}/lib/site-packages/ import sys sys.setdefaultencoding('utf_8') This system encoding is the one that gets used when you use the unicode("str") function without any other encoding parameters. To say that another way, python tries to decode "str" to unicode based on the default system encoding. (3) If you're using IDLE or the command-line python, I think that your console will display according to the default system encoding. I am using pydev with eclipse for some reason, so I had to go into my project settings, edit the launch configuration properties of my test script, go to the Common tab, and change the console from latin-1 to utf-8 so that I could visually confirm what I was doing was working. (4) If you want to have some test strings, eg test_str = "ó" in your source code, then you will have to tell python what kind of encoding you are using in that file. (FYI: when I mistyped an encoding I had to ctrl-Z because my file became unreadable.) This is easily accomplished by putting a line like so at the top of your source code file: # -*- coding: utf_8 -*- If you don't have this information, python attempts to parse your code as ascii by default, and so: SyntaxError: Non-ASCII character '\xf3' in file _redacted_ on line 81, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details Once your program is working correctly, or, if you aren't using python's console or any other console to look at output, then you will probably really only care about #1 on the list. System default and console encoding are not that important unless you need to look at output and/or you are using the builtin unicode() function (without any encoding parameters) instead of the string.decode() function. I wrote a demo function I will paste into the bottom of this gigantic mess that I hope correctly demonstrates the items in my list. Here is some of the output when I run the character 'ó' through the demo function, showing how various methods react to the character as input. My system encoding and console output are both set to utf_8 for this run: '?' = original char <type 'str'> repr(char)='\xf3' '?' = unicode(char) ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data 'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Now I will change the system and console encoding to latin_1, and I get this output for the same input: 'ó' = original char <type 'str'> repr(char)='\xf3' 'ó' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3' 'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Notice that the 'original' character displays correctly and the builtin unicode() function works now. Now I change my console output back to utf_8. '?' = original char <type 'str'> repr(char)='\xf3' '?' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3' '?' 
= char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Here everything still works the same as last time but the console can't display the output correctly. Etc. The function below also displays more information that this and hopefully would help someone figure out where the gap in their understanding is. I know all this information is in other places and more thoroughly dealt with there, but I hope that this would be a good kickoff point for someone trying to get coding with python and/or sqlite. Ideas are great but sometimes source code can save you a day or two of trying to figure out what functions do what. Disclaimers: I'm no encoding expert, I put this together to help my own understanding. I kept building on it when I should have probably started passing functions as arguments to avoid so much redundant code, so if I can I'll make it more concise. Also, utf_8 and latin_1 are by no means the only encoding schemes, they are just the two I was playing around with because I think they handle everything I need. Add your own encoding schemes to the demo function and test your own input. One more thing: there are apparently crazy application developers making life difficult in Windows. #!/usr/bin/env python # -*- coding: utf_8 -*- import os import sys def encodingDemo(str): validStrings = () try: print "str =",str,"{0} repr(str) = {1}".format(type(str), repr(str)) validStrings += ((str,""),) except UnicodeEncodeError as ude: print "Couldn't print the str itself because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print ude try: x = unicode(str) print "unicode(str) = ",x validStrings+= ((x, " decoded into unicode by the default system encoding"),) except UnicodeDecodeError as ude: print "ERROR. unicode(str) couldn't decode the string because the system encoding is set to an encoding that doesn't understand some character in the string." print "\tThe system encoding is set to {0}. See error:\n\t".format(sys.getdefaultencoding()), print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the unicode(str) because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print uee try: x = str.decode('latin_1') print "str.decode('latin_1') =",x validStrings+= ((x, " decoded with latin_1 into unicode"),) try: print "str.decode('latin_1').encode('utf_8') =",str.decode('latin_1').encode('utf_8') validStrings+= ((x, " decoded with latin_1 into unicode and encoded into utf_8"),) except UnicodeDecodeError as ude: print "The string was decoded into unicode using the latin_1 encoding, but couldn't be encoded into utf_8. See error:\n\t", print ude except UnicodeDecodeError as ude: print "Something didn't work, probably because the string wasn't latin_1 encoded. See error:\n\t", print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the str.decode('latin_1') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print uee try: x = str.decode('utf_8') print "str.decode('utf_8') =",x validStrings+= ((x, " decoded with utf_8 into unicode"),) try: print "str.decode('utf_8').encode('latin_1') =",str.decode('utf_8').encode('latin_1') except UnicodeDecodeError as ude: print "str.decode('utf_8').encode('latin_1') didn't work. 
The string was decoded into unicode using the utf_8 encoding, but couldn't be encoded into latin_1. See error:\n\t", validStrings+= ((x, " decoded with utf_8 into unicode and encoded into latin_1"),) print ude except UnicodeDecodeError as ude: print "str.decode('utf_8') didn't work, probably because the string wasn't utf_8 encoded. See error:\n\t", print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the str.decode('utf_8') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",uee print print "Printing information about each character in the original string." for char in str: try: print "\t'" + char + "' = original char {0} repr(char)={1}".format(type(char), repr(char)) except UnicodeDecodeError as ude: print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), ude) except UnicodeEncodeError as uee: print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), uee) print uee try: x = unicode(char) print "\t'" + x + "' = unicode(char) {1} repr(unicode(char))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = unicode(char) ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = unicode(char) {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) try: x = char.decode('latin_1') print "\t'" + x + "' = char.decode('latin_1') {1} repr(char.decode('latin_1'))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = char.decode('latin_1') ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = char.decode('latin_1') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) try: x = char.decode('utf_8') print "\t'" + x + "' = char.decode('utf_8') {1} repr(char.decode('utf_8'))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = char.decode('utf_8') ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = char.decode('utf_8') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) print x = 'ó' encodingDemo(x) Much thanks for the answers below and especially to @John Machin for answering so thoroughly.

    Read the article

  • MySQL "ERROR 1005 (HY000): Can't create table 'foo.#sql-12c_4' (errno: 150)"

    - by Ankur Banerjee
    Hi, I was working on creating some tables in database foo, but every time I end up with errno 150 regarding the foreign key. Firstly, here's my code for creating tables: CREATE TABLE Clients ( client_id CHAR(10) NOT NULL , client_name CHAR(50) NOT NULL , provisional_license_num CHAR(50) NOT NULL , client_address CHAR(50) NULL , client_city CHAR(50) NULL , client_county CHAR(50) NULL , client_zip CHAR(10) NULL , client_phone INT NULL , client_email CHAR(255) NULL , client_dob DATETIME NULL , test_attempts INT NULL ); CREATE TABLE Applications ( application_id CHAR(10) NOT NULL , office_id INT NOT NULL , client_id CHAR(10) NOT NULL , instructor_id CHAR(10) NOT NULL , car_id CHAR(10) NOT NULL , application_date DATETIME NULL ); CREATE TABLE Instructors ( instructor_id CHAR(10) NOT NULL , office_id INT NOT NULL , instructor_name CHAR(50) NOT NULL , instructor_address CHAR(50) NULL , instructor_city CHAR(50) NULL , instructor_county CHAR(50) NULL , instructor_zip CHAR(10) NULL , instructor_phone INT NULL , instructor_email CHAR(255) NULL , instructor_dob DATETIME NULL , lessons_given INT NULL ); CREATE TABLE Cars ( car_id CHAR(10) NOT NULL , office_id INT NOT NULL , engine_serial_num CHAR(10) NULL , registration_num CHAR(10) NULL , car_make CHAR(50) NULL , car_model CHAR(50) NULL ); CREATE TABLE Offices ( office_id INT NOT NULL , office_address CHAR(50) NULL , office_city CHAR(50) NULL , office_County CHAR(50) NULL , office_zip CHAR(10) NULL , office_phone INT NULL , office_email CHAR(255) NULL ); CREATE TABLE Lessons ( lesson_num INT NOT NULL , client_id CHAR(10) NOT NULL , date DATETIME NOT NULL , time DATETIME NOT NULL , milegage_used DECIMAL(5, 2) NULL , progress CHAR(50) NULL ); CREATE TABLE DrivingTests ( test_num INT NOT NULL , client_id CHAR(10) NOT NULL , test_date DATETIME NOT NULL , seat_num INT NOT NULL , score INT NULL , test_notes CHAR(255) NULL ); ALTER TABLE Clients ADD PRIMARY KEY (client_id); ALTER TABLE Applications ADD PRIMARY KEY (application_id); ALTER TABLE Instructors ADD PRIMARY KEY (instructor_id); ALTER TABLE Offices ADD PRIMARY KEY (office_id); ALTER TABLE Lessons ADD PRIMARY KEY (lesson_num); ALTER TABLE DrivingTests ADD PRIMARY KEY (test_num); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Offices FOREIGN KEY (office_id) REFERENCES Offices (office_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Instructors FOREIGN KEY (instructor_id) REFERENCES Instructors (instructor_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Cars FOREIGN KEY (car_id) REFERENCES Cars (car_id); ALTER TABLE Lessons ADD CONSTRAINT FK_Lessons_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); ALTER TABLE Cars ADD CONSTRAINT FK_Cars_Offices FOREIGN KEY (office_id) REFERENCES Offices (office_id); ALTER TABLE Clients ADD CONSTRAINT FK_DrivingTests_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); These are the errors that I get: mysql> ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Cars FOREIGN KEY (car_id) REFERENCES Cars (car_id); ERROR 1005 (HY000): Can't create table 'foo.#sql-12c_4' (errno: 150) I ran SHOW ENGINE INNODB STATUS which gives a more detailed error description: ------------------------ LATEST FOREIGN KEY ERROR ------------------------ 100509 20:59:49 Error in foreign key constraint of table practice9/#sql-12c_4: FOREIGN KEY (car_id) REFERENCES Cars (car_id): Cannot find an index in the 
referenced table where the referenced columns appear as the first columns, or column types in the table and the referenced table do not match for constraint. Note that the internal storage type of ENUM and SET changed in tables created with >= InnoDB-4.1.12, and such columns in old tables cannot be referenced by such columns in new tables. See http://dev.mysql.com/doc/refman/5.1/en/innodb-foreign-key-constraints.html for correct foreign key definition. ------------ I searched around on StackOverflow and elsewhere online - came across a helpful blog post here with pointers on how to resolve this error - but I can't figure out what's going wrong. Any help would be appreciated!

    Read the article

  • how to cast an array of char into a single integer number?

    - by SepiDev
    Hi guys, I'm trying to read the contents of a PNG file. As you may know, all data is written in a 4-byte manner in PNG files, both text and numbers. So if we have the number 35234 it is saved this way: [1000][1001][1010][0010]. But sometimes numbers are shorter, so the first bytes are zero, and when I read the array and cast it from char* to integer I get the wrong number, for example [0000] [0000] [0001] [1011]. Sometimes numbers are misinterpreted as negative numbers and sometimes as zero! Let me give you an intuitive example: char s_num[4] = {120, 80, 40, 1}; int t_num = 0; t_num = int(s_num); => 3215279148 ?????? The result should be 241 but the output is 3215279148? I wish I could explain my problem well! How can I cast such arrays into a single integer value?
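
    PNG stores multi-byte integers most significant byte first, so the four bytes have to be shifted together; int(s_num) instead converts the array's address (a pointer value) to an integer, which is where the huge 3215279148 comes from. A small sketch that reassembles 35234 from its big-endian bytes:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            /* PNG writes the most significant byte first (network byte order). */
            unsigned char b[4] = {0x00, 0x00, 0x89, 0xA2};   /* 35234 stored big-endian */

            uint32_t value = ((uint32_t)b[0] << 24)
                           | ((uint32_t)b[1] << 16)
                           | ((uint32_t)b[2] << 8)
                           |  (uint32_t)b[3];

            printf("%u\n", (unsigned)value);   /* prints 35234 */
            return 0;
        }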

    Read the article
