Search Results

Search found 22043 results on 882 pages for 'int ua'.


  • How to marshal int* in C#?

    - by MartyIX
    Hi, I would like to call this method in an unmanaged library:

        void __stdcall GetConstraints(unsigned int* puiMaxWidth, unsigned int* puiMaxHeight, unsigned int* puiMaxBoxes);

    My solution follows. The delegate definition:

        [UnmanagedFunctionPointer(CallingConvention.StdCall)]
        private delegate void GetConstraintsDel(UIntPtr puiMaxWidth, UIntPtr puiMaxHeight, UIntPtr puiMaxBoxes);

    The call of the method:

        // PLUGIN NAME
        GetConstraintsDel getConstraints = (GetConstraintsDel)Marshal.GetDelegateForFunctionPointer(pAddressOfFunctionToCall, typeof(GetConstraintsDel));
        uint maxWidth, maxHeight, maxBoxes;
        unsafe
        {
            UIntPtr a = new UIntPtr(&maxWidth);
            UIntPtr b = new UIntPtr(&maxHeight);
            UIntPtr c = new UIntPtr(&maxBoxes);
            getConstraints(a, b, c);
        }

    It works, but I have to enable the "unsafe" flag. Is there a solution without unsafe code? Or is this solution OK? I don't quite understand the implications of building the project with the unsafe flag. Thanks for help!
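
    A common alternative (a sketch, assuming the native function only ever writes through the three pointers): declare the delegate with out uint parameters and let the P/Invoke marshaler take the addresses for you, so no unsafe block is needed:

        [UnmanagedFunctionPointer(CallingConvention.StdCall)]
        private delegate void GetConstraintsDel(out uint puiMaxWidth, out uint puiMaxHeight, out uint puiMaxBoxes);

        // Usage: the marshaler pins the locals and passes their addresses.
        uint maxWidth, maxHeight, maxBoxes;
        getConstraints(out maxWidth, out maxHeight, out maxBoxes);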

  • DataAnnotations Automatic Handling of int is Causing a Roadblock

    - by DM
    Summary: DataAnnotations' automatic handling of an "int?" is making me rethink using them at all. Maybe I'm missing something and there's an easy fix, but I can't get DataAnnotations to cooperate. I have a public property with my own custom validation attribute:

        [MustBeNumeric(ErrorMessage = "Must be a number")]
        public int? Weight { get; set; }

    The point of the custom validation attribute is to do a quick check that the input is numeric and display an appropriate error message. The problem is that when model binding tries to bind a string to the int?, it automatically fails validation and displays "The value 'asdf' is not valid for Weight." For the life of me I can't get DataAnnotations to stop handling that so I can take care of it in my custom attribute. This seems like it would be a popular scenario (validating that the input is numeric) and I'm guessing there's an easy solution, but I didn't find it anywhere.
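
    Assuming this is ASP.NET MVC: the "The value ... is not valid" message is added by the model binder during type conversion, before any validation attribute runs, so a custom attribute on an int? can never intercept it. One common workaround is to bind the raw text to a string property and expose the parsed value separately (MustBeNumeric below is the poster's attribute; the rest is an assumption):

        [MustBeNumeric(ErrorMessage = "Must be a number")]
        public string WeightInput { get; set; }

        // Parsed on demand; null when the input was not numeric.
        public int? Weight
        {
            get
            {
                int value;
                return int.TryParse(WeightInput, out value) ? (int?)value : null;
            }
        }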

  • Calling cli::array<int>::Reverse via a typedef in C++/CLI

    - by Vulcan Eager
    Here is what I'm trying:

        typedef cli::array<int> intarray;

        int main()
        {
            intarray ^ints = gcnew intarray { 0, 1, 2, 3 };
            intarray::Reverse(ints); // C2825, C2039, C3149
            return 0;
        }

    Compilation resulted in the following errors:

        .\ints.cpp(46) : error C2825: 'intarray': must be a class or namespace when followed by '::'
        .\ints.cpp(46) : error C2039: 'Reverse' : is not a member of '`global namespace''
        .\ints.cpp(46) : error C3149: 'cli::array<Type>' : cannot use this type here without a top-level '^'
                with
                [
                    Type=int
                ]

    Am I doing something wrong here?
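
    Whatever the typedef limitation, Reverse actually lives on System::Array, the base class of every CLI array, so calling it through the base type sidesteps all three errors; a minimal sketch:

        typedef cli::array<int> intarray;

        int main()
        {
            intarray ^ints = gcnew intarray { 0, 1, 2, 3 };
            System::Array::Reverse(ints); // compiles; reverses in place
            return 0;
        }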

  • Dataset field DBNull -> int?

    - by BobClegg
    SQL Server int field, value sometimes null. The DataAdapter fills the dataset OK and can display the data in a DataGridView OK. When trying to retrieve the data programmatically from the dataset, the generated field accessor throws a StrongTypingException:

        [global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
        public int curr_reading
        {
            get
            {
                try
                {
                    return ((int)(this[this.tableHistory.curr_readingColumn]));
                }
                catch (global::System.InvalidCastException e)
                {
                    throw new global::System.Data.StrongTypingException("The value for column 'curr_reading' in table 'History' is DBNull.", e);
                }
            }
        }

    I got past this by checking for DBNull in the get accessor and returning null, but when the dataset structure is modified (still developing), my changes are (unsurprisingly) gone. What is the best way to handle this situation? It seems I am stuck with dealing with it at the dataset level. Is there some sort of attribute that can tell the code generator to leave the changes in place?
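
    One regeneration-proof option, assuming the dataset is generated as partial classes (the default in Visual Studio 2005 and later): keep the hand-written accessor in a separate file the designer never rewrites, built on the generated Is...Null() method. MyDataSet stands in for the actual dataset class name:

        // HistoryRowExtensions.cs -- never touched by the designer
        public partial class MyDataSet
        {
            public partial class HistoryRow
            {
                // Survives regeneration; uses the generated null test.
                public int? CurrReadingOrNull
                {
                    get { return Iscurr_readingNull() ? (int?)null : curr_reading; }
                }
            }
        }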

  • GLSL: How do I cast a float into an int?

    - by dugla
    In a GLSL fragment shader I am trying to cast a float into an int. The compiler has other ideas. It complains thusly:

        ERROR: 0:60: '=' : cannot convert from 'mediump float' to 'highp int'

    I am trying to do this:

        mediump float indexf = floor(2.0 * mixer);
        highp int index = indexf;

    I (vainly) tried to raise the precision of the int above the float to appease the GL gods, but no joy. Could someone please school me here? Thanks, Doug
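
    GLSL (the ES profile especially) does not convert float to int implicitly, and precision qualifiers have nothing to do with it; the conversion must be written with constructor syntax:

        mediump float indexf = floor(2.0 * mixer);
        highp int index = int(indexf); // explicit constructor-style conversion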

  • scanf("%d", char*) - char-as-int format string?

    - by SF.
    What is the format string modifier for char-as-number? I want to read a number never exceeding 255 (actually much less) into an unsigned char variable using sscanf. Using the typical

        char source[] = "x32";
        char separator;
        unsigned char dest;
        int len;

        len = sscanf(source, "%c%d", &separator, &dest);
        // validate and proceed...

    I'm getting the expected warning: argument 4 of sscanf is type char*, int* expected. As I understand the specs, there is no modifier for plain char (the way %hd handles short, or %lld handles long long).

    - Is it dangerous? (Will overflow just roll over the variable, or will it write outside the allocated space?)
    - Is there a prettier way to achieve this than allocating a temporary int variable?
    - Or would you suggest an entirely different approach altogether?
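
    C99 added exactly this modifier: hh, so "%hhu" reads straight into an unsigned char. And yes, the warning marks real danger: "%d" stores a full int through the pointer, writing past the one-byte variable, which is undefined behavior. A sketch of both options:

        #include <stdio.h>

        char source[] = "x32";
        char separator;
        unsigned char dest;

        /* C99: the hh length modifier targets (unsigned) char directly. */
        sscanf(source, "%c%hhu", &separator, &dest);

        /* Pre-C99 fallback: read into a temporary int and range-check. */
        int tmp;
        if (sscanf(source, "%c%d", &separator, &tmp) == 2 && tmp >= 0 && tmp <= 255)
            dest = (unsigned char)tmp;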

  • int vs size_t on 64bit

    - by MK
    Porting code from 32-bit to 64-bit. There are lots of places with

        int len = strlen(pstr);

    These all generate warnings now, because strlen() returns size_t, which is 64-bit, while int is still 32-bit. So I've been replacing them with

        size_t len = strlen(pstr);

    But I just realized that this is not safe, as size_t is unsigned and it can be treated as signed by the code (I actually ran into one case where it caused a problem; thank you, unit tests!). Blindly casting the strlen return to (int) feels dirty. Or maybe it shouldn't? So the question is: is there an elegant solution for this? I probably have a thousand lines of code like that in the codebase; I can't manually check each one of them, and the test coverage is currently somewhere between 0.01% and 0.001%.
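
    The usual failure mode after switching to size_t is unsigned wraparound in arithmetic, and where casting back to int is unavoidable, a checked helper at least makes truncation loud. A sketch (strlen_i is a made-up name):

        #include <string.h>
        #include <assert.h>
        #include <limits.h>

        /* Pitfall: if len == 0, len - 1 wraps to SIZE_MAX and this loop never ends:
         *   for (size_t i = 0; i <= len - 1; i++) ...
         */

        /* One mitigation: a checked conversion used at the few signed boundaries. */
        static int strlen_i(const char *s)
        {
            size_t n = strlen(s);
            assert(n <= INT_MAX); /* fires in debug builds if the cast would truncate */
            return (int)n;
        }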

  • SDL_ttf and Numbers (int)

    - by jack moore
        int score = 0;
        char* fixedscore = (char*)score;
        ...
        imgTxt = TTF_RenderText_Solid(font, fixedscore, fColor);

    ^^ This doesn't work; it looks like fixedscore is empty or doesn't exist.

        int score = 0;
        char* fixedscore = (char*)score;
        ...
        imgTxt = TTF_RenderText_Solid(font, "Works fine", fColor);

    ^^ Works fine, but... I guess casting int to char* doesn't really work. So how do you print scores in SDL? Oh, and one more thing: why is the text so ugly? Any help would be appreciated. Thanks.
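
    The cast only reinterprets the integer 0 as a (null) pointer; turning a number into text needs an actual buffer, e.g. via snprintf. As for the ugliness, TTF_RenderText_Solid does no anti-aliasing; TTF_RenderText_Blended is the smooth variant. A sketch:

        char buf[32];
        snprintf(buf, sizeof buf, "Score: %d", score);      // format the int into text
        imgTxt = TTF_RenderText_Blended(font, buf, fColor); // anti-aliased rendering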

  • [C] Signed Hexadecimal string to long int function

    - by Ben
    I am trying to convert a 24-bit hexadecimal string (6 characters), signed in two's complement, to a long int in C. This is the function I have come up with:

        long int hex2li(char string[])
        {
            char *pEnd;
            long int result = strtol(string, &pEnd, 16);
            if (strcmp(pEnd, "") == 0)
            {
                if (toupper(string[0]) == 'F')
                {
                    return result - 16777216;
                }
                else
                {
                    return result;
                }
            }
            return LONG_MIN;
        }

    Is it valid? Is there a better way of doing this?
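
    One bug worth noting: the leading-'F' test only catches values from 0xF00000 up, but the sign bit of a 24-bit number is 0x800000, so any leading digit from 8 through F is negative. A generic sign-extension sketch (hex24_to_long is a renamed variant of the poster's hex2li):

        #include <stdlib.h>
        #include <limits.h>

        long int hex24_to_long(const char *s)
        {
            char *end;
            long int v = strtol(s, &end, 16);
            if (*end != '\0')
                return LONG_MIN;   /* not a clean hex string */
            if (v & 0x800000L)     /* sign bit of a 24-bit value */
                v -= 0x1000000L;   /* sign-extend: subtract 2^24 */
            return v;
        }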

  • Setting minOccurs="0" (optional) on web service parameters of type int

    - by Alex Angas
    I have an ASP.NET 2.0 web method with the following signature:

        [WebMethod]
        public QueryResult[] GetListData(string url, string list, string query, int noOfItems, string titleField)

    I'm running the disco.exe tool to generate .wsdl and .disco files from this web service for use in SharePoint. The following WSDL for the parameters is being generated:

        <s:element minOccurs="0" maxOccurs="1" name="url" type="s:string" />
        <s:element minOccurs="0" maxOccurs="1" name="list" type="s:string" />
        <s:element minOccurs="0" maxOccurs="1" name="query" type="s:string" />
        <s:element minOccurs="1" maxOccurs="1" name="noOfItems" type="s:int" />
        <s:element minOccurs="0" maxOccurs="1" name="titleField" type="s:string" />

    Why does the int parameter have minOccurs set to 1 instead of 0, and how do I change it? I've tried using [XmlElementAttribute(IsNullable=false)] in the parameter declaration without success.
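
    Reference types get minOccurs="0" because they can be null on the wire; a plain int always has a value, so the serializer marks it required. One documented XmlSerializer convention that may help here is the "Specified" pattern: pairing the value-type parameter with a bool whose name ends in "Specified" tells the serializer the element is optional. A sketch:

        [WebMethod]
        public QueryResult[] GetListData(
            string url, string list, string query,
            int noOfItems, bool noOfItemsSpecified, // Specified pattern: noOfItems becomes minOccurs="0"
            string titleField)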

  • Int[] Reverse - What does this actually do?

    - by Jamie Dixon
    I was just having a play around with some code in LINQPad and noticed that on an int array there is a Reverse method. Usually when I want to reverse an int array I'd do so with

        Array.Reverse(myIntArray);

    which, given the array {1,2,3,4}, would then return 4 as the value of myIntArray[0]. When I used the Reverse() method directly on my int array:

        myIntArray.Reverse();

    I noticed that myIntArray[0] still comes out as 1. What is the Reverse method actually doing here?
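
    That parameterless Reverse() is the LINQ extension method Enumerable.Reverse (from System.Linq, which LINQPad imports by default): it returns a new reversed sequence and never mutates the source, which is why myIntArray[0] is still 1. Side by side:

        int[] myIntArray = { 1, 2, 3, 4 };

        Array.Reverse(myIntArray);                       // in place: myIntArray is now { 4, 3, 2, 1 }

        int[] reversed = myIntArray.Reverse().ToArray(); // LINQ: reversed copy, source untouched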

  • Trying to reduce the speed overhead of an almost-but-not-quite-int number class

    - by Fumiyo Eda
    I have implemented a C++ class which behaves very similarly to the standard int type. The difference is that it has an additional concept of "epsilon", which represents some tiny value that is much less than 1 but greater than 0. One way to think of it is as a very wide fixed-point number with 32 MSBs (the integer parts), 32 LSBs (the epsilon parts) and a huge sea of zeros in between. The following class works, but introduces a ~2x speed penalty in the overall program. (The program includes code that has nothing to do with this class, so the actual speed penalty of this class is probably much greater than 2x.) I can't paste the code that is using this class, but I can say the following:

    - +, -, +=, <, > and >= are the only heavily used operators.
    - Use of setEpsilon() and getInt() is extremely rare.
    - * is also rare, and does not even need to consider the epsilon values at all.

    Here is the class:

        #include <limits>

        struct int32Uepsilon
        {
            typedef int32Uepsilon Self;

            int32Uepsilon ()             { _value = 0; _eps = 0; }
            int32Uepsilon (const int &i) { _value = i; _eps = 0; }

            void setEpsilon() { _eps = 1; }

            Self operator+(const Self &rhs) const
            {
                Self result = *this;
                result._value += rhs._value;
                result._eps += rhs._eps;
                return result;
            }

            Self operator-(const Self &rhs) const
            {
                Self result = *this;
                result._value -= rhs._value;
                result._eps -= rhs._eps;
                return result;
            }

            Self operator-() const
            {
                Self result = *this;
                result._value = -result._value;
                result._eps = -result._eps;
                return result;
            }

            Self operator*(const Self &rhs) const
            {
                return this->getInt() * rhs.getInt(); // XXX: discards epsilon
            }

            bool operator<(const Self &rhs) const
            {
                return (_value < rhs._value) ||
                       (_value == rhs._value && _eps < rhs._eps);
            }

            bool operator>(const Self &rhs) const
            {
                return (_value > rhs._value) ||
                       (_value == rhs._value && _eps > rhs._eps);
            }

            bool operator>=(const Self &rhs) const
            {
                return (_value >= rhs._value) ||
                       (_value == rhs._value && _eps >= rhs._eps);
            }

            Self &operator+=(const Self &rhs)
            {
                this->_value += rhs._value;
                this->_eps += rhs._eps;
                return *this;
            }

            Self &operator-=(const Self &rhs)
            {
                this->_value -= rhs._value;
                this->_eps -= rhs._eps;
                return *this;
            }

            int getInt() const { return (_value); }

        private:
            int _value;
            int _eps;
        };

        namespace std
        {
            template<>
            struct numeric_limits<int32Uepsilon>
            {
                static const bool is_signed = true;
                static int max() { return 2147483647; }
            };
        }

    The code above works, but it is quite slow. Does anyone have any ideas on how to improve performance? There are a few hints/details I can give that might be helpful:

    - 32 bits are definitely insufficient to hold both _value and _eps. In practice, up to 24 to 28 bits of _value are used and up to 20 bits of _eps are used.
    - I could not measure a significant performance difference between using int32_t and int64_t, so memory overhead itself is probably not the problem here.
    - Saturating addition/subtraction on _eps would be cool, but isn't really necessary.
    - Note that the signs of _value and _eps are not necessarily the same! This broke my first attempt at speeding this class up.
    - Inline assembly is no problem, so long as it works with GCC on a Core i7 system running Linux!
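
    Given the hints above (magnitudes fit in roughly 28 and 20 bits, but the signs of _value and _eps may differ), one candidate speedup is packing both fields into a single signed 64-bit integer, scaled so the fields cannot collide; +, -, and every comparison then collapse to one hardware op each. A sketch, valid only while |_eps| stays below 2^20 under accumulation:

        #include <cstdint>

        struct int32Uepsilon
        {
            // Layout: packed = value * 2^21 + eps, with |eps| < 2^20. Under that
            // invariant, additions never carry between fields, and numeric order
            // of 'packed' equals lexicographic (value, eps) order.
            int64_t packed;

            int32Uepsilon(int v = 0) : packed(int64_t(v) << 21) {}

            void setEpsilon() { packed = (int64_t(getInt()) << 21) + 1; }

            // Bias by 2^20 so a signed eps field rounds back out correctly.
            int getInt() const { return int((packed + (int64_t(1) << 20)) >> 21); }

            int32Uepsilon operator+(const int32Uepsilon &r) const
            { int32Uepsilon t; t.packed = packed + r.packed; return t; }

            int32Uepsilon operator-(const int32Uepsilon &r) const
            { int32Uepsilon t; t.packed = packed - r.packed; return t; }

            int32Uepsilon &operator+=(const int32Uepsilon &r) { packed += r.packed; return *this; }
            int32Uepsilon &operator-=(const int32Uepsilon &r) { packed -= r.packed; return *this; }

            bool operator< (const int32Uepsilon &r) const { return packed <  r.packed; }
            bool operator> (const int32Uepsilon &r) const { return packed >  r.packed; }
            bool operator>=(const int32Uepsilon &r) const { return packed >= r.packed; }
        };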

  • C++ struct containing unsigned char and int bug

    - by powerfear
    OK, I have a struct in my C++ program that is like this:

        struct thestruct
        {
            unsigned char var1;
            unsigned char var2;
            unsigned char var3[2];
            unsigned char var4;
            unsigned char var5[8];
            int var6;
            unsigned char var7[4];
        };

    When I use this struct, 3 random bytes get added before "var6". If I delete "var5" they are still before "var6", so I know they are always before "var6". But if I remove "var6" then the 3 extra bytes are gone. If I only use a struct with an int in it, there are no extra bytes. So there seems to be a conflict between the unsigned char and the int; how can I fix that?
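
    Those 3 bytes are alignment padding, not random garbage: the members before var6 occupy 13 bytes, and the compiler pads to the int's 4-byte alignment boundary. If the layout must be byte-exact (e.g. matching a file or wire format), packing directives remove the padding at the price of unaligned int access:

        #pragma pack(push, 1)   // supported by MSVC and GCC
        struct thestruct
        {
            unsigned char var1;
            unsigned char var2;
            unsigned char var3[2];
            unsigned char var4;
            unsigned char var5[8];
            int var6;            // now at offset 13, no padding
            unsigned char var7[4];
        };
        #pragma pack(pop)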

  • Why can't `main` return a double or String rather than int or void?

    - by sunny
    In many languages such as C, C++, and Java, the main method/function has a return type of void or int, but not double or String. What might be the reasons behind that? I know a little bit about why: main is called by the runtime library, which expects a particular signature such as int main() or int main(int, char**), so we have to stick to that. So my question is: why does main have the type signature that it has, and not a different one?
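
    For the "why" part: main's return value becomes the process exit status, and operating systems define that status as a small integer (0 conventionally meaning success) that the parent process or shell inspects; there is no channel for a double or a string to travel back. A minimal illustration in C:

        int main(void)
        {
            return 42; /* afterwards, a POSIX shell can read it: echo $? prints 42 */
        }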

  • C# Convert negative int to 11 bits

    - by Klemenko
    I need to convert numbers in the interval [-1024, 1016]. I'm converting to 11 bits like this:

        string s = Convert.ToString(value, 2);              // convert to binary in a string

        int[] bits = s.PadLeft(11, '0')                     // add 0's from the left
                      .Select(c => int.Parse(c.ToString())) // convert each char to int
                      .ToArray();                           // convert IEnumerable from Select to an array

    This works perfectly for non-negative integers [0, 1016], but for negative integers I get a 32-bit result. Do you have any idea how to convert negative integers to an 11-bit array?
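
    Convert.ToString(value, 2) emits all 32 bits of a negative int because two's complement sets every upper bit. Masking to the low 11 bits first yields the 11-bit two's-complement pattern (valid for the representable range [-1024, 1023]):

        string s = Convert.ToString(value & 0x7FF, 2); // 0x7FF keeps only the low 11 bits
        int[] bits = s.PadLeft(11, '0')
                      .Select(c => c - '0')            // '0'/'1' to 0/1
                      .ToArray();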

  • Construct a variadic template of unsigned int recursively

    - by Vincent
    I need a tricky thing in C++11 code. Currently, I have a metafunction of this kind:

        template<unsigned int N, unsigned int M>
        static constexpr unsigned int myFunction()

    This function can generate a number based on N and M. I would like to write a metafunction with inputs N and M that will recursively construct a variadic template by decrementing M. For example, calling it with M = 3 would construct a variadic template called List equal to:

        List... = myFunction<N, 3>, myFunction<N, 2>, myFunction<N, 1>, myFunction<N, 0>

    How to do that (if it is possible, of course)?
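
    One classic C++11 approach (a sketch; MakeList and the std::array holder are assumptions, not the poster's code) grows a value pack while decrementing M:

        #include <array>

        template<unsigned int N, unsigned int M>
        static constexpr unsigned int myFunction() { return N + M; } // stand-in body

        // Recursive case: append myFunction<N, M>() and decrement M.
        template<unsigned int N, unsigned int M, unsigned int... Vs>
        struct MakeList : MakeList<N, M - 1, Vs..., myFunction<N, M>()> {};

        // Base case: M == 0 appends the final element and exposes the pack.
        template<unsigned int N, unsigned int... Vs>
        struct MakeList<N, 0, Vs...>
        {
            static constexpr std::array<unsigned int, sizeof...(Vs) + 1> list
                = {{ Vs..., myFunction<N, 0>() }};
            // (C++11: add an out-of-line definition of 'list' if it is odr-used at run time.)
        };

        // Usage: MakeList<7, 3>::list holds
        // { myFunction<7,3>(), myFunction<7,2>(), myFunction<7,1>(), myFunction<7,0>() }.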

  • Problem using the Conditional Operator with a Nullable Int

    - by Rajarshi
    A small problem. Any idea, guys, why this does not work?

        int? nullableIntVal = (this.Policy == null) ? null : 1;

    I am trying to return 'null' if the left-hand expression is true, else 1. Seems simple, but it gives a compilation error:

        Type of conditional expression cannot be determined because there is no implicit conversion between 'null' and 'int'

    If I replace the "? null : 1" with any valid int, then there is no problem.
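
    The two branches of ?: must share a type, and the compiler does not look at the assignment target to find one; a bare null has no type that converts to int. Casting either branch to int? fixes it:

        int? nullableIntVal = (this.Policy == null) ? (int?)null : 1; // both branches now convert to int?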

  • int i vs int index etc. Which one is better?

    - by Earlz
    Coming from a C background, I've always used int i for generic loop variables. Of course, in big nested loops or other complex code I may use a descriptive name, but which one would you rather see?

        int i;
        for (i = 0; i < Controls.Count; i++)
        {
            DoStuff(Controls[i]);
        }

    or

        int index;
        for (index = 0; index < Controls.Count; index++)
        {
            DoStuff(Controls[index]);
        }

    In the current project I am working on, both of these styles appear, with index sometimes shortened to ndx. Which one is better? Is the i variable too generic? And what about the other C-style names i, j, k; should all of these be replaced by actual descriptive variables?

  • Bewildering SegFault involving STL sort algorithm.

    - by just_wes
    Hello everybody, I am completely perplexed at a seg fault that I seem to be creating. I have:

        vector<unsigned int> words;

    and a global variable

        string input;

    I define my custom compare function:

        bool wordncompare(unsigned int f, unsigned int s)
        {
            int n = k;
            while (((f < input.size()) && (s < input.size())) && (input[f] == input[s]))
            {
                if ((input[f] == ' ') && (--n == 0))
                {
                    return false;
                }
                f++;
                s++;
            }
            return true;
        }

    When I run the code

        sort(words.begin(), words.end());

    the program exits smoothly. However, when I run the code

        sort(words.begin(), words.end(), wordncompare);

    I generate a SegFault deep within the STL. The GDB backtrace looks like this (template arguments elided for readability):

        #0  0x00007ffff7b79893 in std::string::size() const () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/libstdc++.so.6
        #1  0x0000000000400f3f in wordncompare (f=90, s=0) at text_gen2.cpp:40
        #2  0x000000000040188d in std::__unguarded_linear_insert<...> at bits/stl_algo.h:1735
        #3  0x00000000004018df in std::__unguarded_insertion_sort<...> at bits/stl_algo.h:1812
        #4  0x0000000000402562 in std::__final_insertion_sort<...> at bits/stl_algo.h:1845
        #5  0x0000000000402c20 in std::sort<...> at bits/stl_algo.h:4822
        #6  0x00000000004012d2 in main (argc=1, args=0x7fffffffe0b8) at text_gen2.cpp:70

    I have similar code in another program, but in that program I am using a vector<string> instead of a vector<unsigned int>. For the life of me I can't figure out what I'm doing wrong. Thanks!
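
    The likely culprit: std::sort requires a strict weak ordering, but this comparator returns true for equal keys, and even for wordncompare(x, x). The unguarded insertion sort in libstdc++ relies on the comparator eventually returning false instead of bounds checks, so a non-conforming comparator walks outside the range, which matches the odd f=90, s=0 frame. A comparator along these lines restores the contract (a sketch; k and input as in the question):

        bool wordncompare(unsigned int f, unsigned int s)
        {
            int n = k;
            while (f < input.size() && s < input.size())
            {
                if (input[f] != input[s])
                    return input[f] < input[s];  // first differing character decides
                if (input[f] == ' ' && --n == 0)
                    return false;                // first k words equal: not "less than"
                f++;
                s++;
            }
            return f >= input.size() && s < input.size(); // shorter suffix sorts first
        }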

  • mysql data type confusion

    - by zen
    So this is more of a generalized question about MySQL's data types. I'd like to store a 5-digit US zip code (zip_code) properly in this example. A county has 10 different cities and 5 different zip codes:

        city   | zip code
        -------+----------
        city 0 | 33333
        city 1 | 11111
        city 2 | 22222
        city 3 | 33333
        city 4 | 44444
        city 5 | 55555
        city 6 | 33333
        city 7 | 33333
        city 8 | 44444
        city 9 | 22222

    I would typically structure a table like this as varchar(50), int(5) and not think twice about it.

    (1) If we wanted to ensure that this table had only one of 5 different zip codes, we should use the ENUM data type, right?

    Now think of a similar scenario on a much larger scale. In a state, there are five hundred cities with 418 different zip codes.

    (2) Should I store 418 zip codes as an ENUM data type, OR as an int and create another table to reference?
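
    A caution before reaching for ENUM or INT: US zip codes can start with 0 (much of New England), and an INT silently drops that leading zero, so CHAR(5) is the usual choice. Uniqueness at any scale is better handled with a lookup table than an ENUM, which requires an ALTER TABLE for every new code. A sketch:

        CREATE TABLE zip_codes (
            zip CHAR(5) PRIMARY KEY            -- keeps leading zeros, e.g. '02134'
        );

        CREATE TABLE cities (
            id   INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(50) NOT NULL,
            zip  CHAR(5) NOT NULL,
            FOREIGN KEY (zip) REFERENCES zip_codes (zip)
        );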

  • C# Timer counter in xx.xx.xx format

    - by Darkshadw
    I have a counter that counts up every second, adding 1 to an int.

    Question: How can I format my string so the counter looks like this:

        00:01:23

    instead of:

        123

    Things I've tried so far:

        for (int i = 0; i < 1; i++)
        {
            _Counter += 1;
            labelUpTime.Text = _Counter.ToString();
        }

    My timer's interval is set to 1000 (so it adds 1 every second). I did read something about string.Format(""), but I don't know if it is applicable. Thanks if you can guide me through this!
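
    TimeSpan does the carrying into minutes and hours for you: treat the counter as a number of seconds and format it. A sketch for the timer's Tick handler:

        _Counter += 1;
        labelUpTime.Text = TimeSpan.FromSeconds(_Counter).ToString(@"hh\:mm\:ss");

    (The custom TimeSpan format strings require .NET 4; on earlier versions, string.Format("{0:00}:{1:00}:{2:00}", t.Hours, t.Minutes, t.Seconds) on a TimeSpan t achieves the same.)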

  • int() error in django views

    - by Hulk
        def displaydata(request):
            response_dict = {}
            offset = int(request.GET.get('iDisplayStart'))

    The line above raises an error:

        int() argument must be a string or a number

    And in the template code:

        $(document).ready(function() {
            $.ajaxSetup({ cache: false });
            oTable = $('#qp_table').dataTable({
                "aoColumns": [
                    { "sWidth": "5%" },
                    { "sWidth": "35%" },
                    { "sWidth": "27%" },
                    { "sWidth": "15%" },
                    { "bSortable": false, "sWidth": "0%" },
                    { "bSortable": false, "sWidth": "0%" }
                ],
                "aaSorting": [[0, 'asc']],
                "bProcessing": true,
                "bServerSide": true,
                "sAjaxSource": "/diaplaydata/",
                "bJQueryUI": true,
                "sPaginationType": "full_numbers",
                "bFilter": false,
                "oLanguage": {
                    "sZeroRecords": "No data found",
                    "sProcessing": "Fetching Data"
                }
            });
        });
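
    request.GET.get('iDisplayStart') returns None when the parameter is missing, and int(None) raises exactly this TypeError (also worth checking: sAjaxSource says '/diaplaydata/', which looks like a typo for '/displaydata/', so the DataTables parameters may never reach this view). A defensive version of the view:

        def displaydata(request):
            response_dict = {}
            # Fall back to 0 when iDisplayStart is absent from the query string.
            offset = int(request.GET.get('iDisplayStart', 0))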

  • Extending Python’s int type to accept only values within a given range

    - by igor
    I would like to create a custom data type which basically behaves like an ordinary int, but with the value restricted to be within a given range. I guess I need some kind of factory function, but I cannot figure out how to do it.

        myType = MyCustomInt(minimum=7, maximum=49, default=10)
        i = myType(16)   # OK
        i = myType(52)   # raises ValueError
        i = myType()     # i == 10

        positiveInt    = MyCustomInt(minimum=1)    # no maximum restriction
        negativeInt    = MyCustomInt(maximum=-1)   # no minimum restriction
        nonsensicalInt = MyCustomInt()             # well, the same as an ordinary int

    Any hint is appreciated. Thanks!
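
    One way to realize exactly that interface is a factory that builds an int subclass whose __new__ range-checks; a sketch (MyCustomInt is the poster's name, the body is an assumption):

        def MyCustomInt(minimum=None, maximum=None, default=0):
            class BoundedInt(int):
                def __new__(cls, value=default):
                    v = int(value)
                    if minimum is not None and v < minimum:
                        raise ValueError("%d is below minimum %d" % (v, minimum))
                    if maximum is not None and v > maximum:
                        raise ValueError("%d is above maximum %d" % (v, maximum))
                    return int.__new__(cls, v)
            return BoundedInt

        myType = MyCustomInt(minimum=7, maximum=49, default=10)
        i = myType(16)   # OK
        i = myType(52)   # raises ValueError
        i = myType()     # i == 10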

  • combobox value string to int

    - by asli
    Hi, I have a question about converting between types. I want to change the selected combobox value from string to int, but I get errors. My code:

        int.Parse(age.SelectedItem.ToString());

    What can I do about this problem? Thank you
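
    int.TryParse is the error-free variant; also note that int.Parse(...) on its own line throws away its result, so the parsed value has to be assigned. A sketch (ageValue is a made-up local; age is the poster's combobox):

        int ageValue;
        if (age.SelectedItem != null &&
            int.TryParse(age.SelectedItem.ToString(), out ageValue))
        {
            // ageValue now holds the numeric selection
        }
        else
        {
            // nothing selected, or the selection was not numeric
        }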
