Search Results

Search found 3768 results on 151 pages for 'lite byte'.

Page 110/151

  • Suppressing the language select button on iPhone

    - by AWinter
    I'm working on an application that contains an account registration section, with one password field where secureTextEntry = NO (for registration only). The idea is that this makes registration faster and hopefully increases the number of signups. It's simple enough to just place a regular UITextField, but if the user has any additional language keyboards enabled, they can enter non-password-friendly characters, unlike when secureTextEntry = YES. I know I can set textField.keyboardType = UIKeyboardTypeASCIICapable to make the text field display the ASCII keyboard first, but the user will still have the keyboard-switch button, which lets them reach undesirable characters. Is there a simple method for suppressing the international button, or for forcing an ASCII-only keyboard with no international button? [EDIT] Another, perhaps better, option might be to suppress multi-byte keyboards, or to display the text as though secureTextEntry = YES. Any ideas here? [EDIT AGAIN] I've decided it's a really bad idea to suppress the international button, as non-multibyte characters should all be allowed.
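
    In case filtering input proves necessary after all, here is a minimal delegate sketch (hedged: it assumes a wired-up UITextFieldDelegate and that plain ASCII is the rule you want to enforce):

        // Reject any replacement text that cannot be represented in ASCII,
        // regardless of which keyboard the user switched to.
        - (BOOL)textField:(UITextField *)textField
            shouldChangeCharactersInRange:(NSRange)range
            replacementString:(NSString *)string {
            return [string canBeConvertedToEncoding:NSASCIIStringEncoding];
        }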

    Read the article

  • Any difference in compiler behavior for each of these snippets?

    - by HotHead
    Please consider the following code:

        1. uint16 a = 0x0001;
           if (a < 0x0002) { /* do something */ }

        2. uint16 a = 0x0001;
           if (a < uint16(0x0002)) { /* do something */ }

        3. uint16 a = 0x0001;
           if (a < static_cast<uint16>(0x0002)) { /* do something */ }

        4. uint16 a = 0x0001;
           uint16 b = 0x0002;
           if (a < b) { /* do something */ }

    What does the compiler do in the background for each of these, and what is the best (and correct) way to write the test above? P.S. Sorry, but I couldn't find a better title. :) EDIT: the values 0x0001 and 0x0002 are only examples; any two-byte value could appear instead. Thank you in advance!

    Read the article

  • How do you use Java 1.6 Annotation Processing to perform compile time weaving?

    - by Steve
    I have created an annotation, applied it to a DTO, and written a Java 1.6-style annotation processor. I can see how to have the annotation processor write a new source file, which isn't what I want to do; I cannot see or find out how to have it modify the existing class (ideally by just modifying the bytecode). The modification is actually fairly trivial: all I want the processor to do is insert a new getter and setter, where the name comes from the value of the annotation being processed. My annotation processor looks like this:

        @SupportedSourceVersion(SourceVersion.RELEASE_6)
        @SupportedAnnotationTypes({ "com.kn.salog.annotation.AggregateField" })
        public class SalogDTOAnnotationProcessor extends AbstractProcessor {
            @Override
            public boolean process(final Set<? extends TypeElement> annotations,
                    final RoundEnvironment roundEnv) {
                // do some stuff
            }
        }

    Read the article

  • Something like System.Diagnostics.Process.Start to run a stream

    - by phenevo
    Hi, I get images and videos from a server as a stream. Right now I'm saving it like this:

        Stream str = client.GetFile(path);
        using (var outStream = new FileStream(@"c:\myFile.jpg", FileMode.Create))
        {
            var buffer = new byte[4096];
            int count;
            while ((count = str.Read(buffer, 0, buffer.Length)) > 0)
            {
                outStream.Write(buffer, 0, count);
            }
        }

    It can be jpg, mpg, flv, and a lot of other multimedia types (before I get the stream, I know the file's extension). Now I want not to save it, but to run it directly from the stream. Is that possible?
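
    A hedged sketch of the usual workaround: most registered viewers need a real file path, so the stream is written to a temporary file and launched from there (the "extension" variable is assumed known beforehand, as stated above):

        // Write the stream to a temp file, then hand the path to the shell.
        string tempPath = Path.Combine(Path.GetTempPath(), "preview" + extension);
        using (var outStream = new FileStream(tempPath, FileMode.Create))
        {
            var buffer = new byte[4096];
            int count;
            while ((count = str.Read(buffer, 0, buffer.Length)) > 0)
                outStream.Write(buffer, 0, count);
        }
        System.Diagnostics.Process.Start(tempPath); // opens the registered viewer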

    Read the article

  • C# Socket Server

    - by Snoopy
    In .NET 3.5 a new set of socket classes was released: http://msdn.microsoft.com/en-us/library/bb968780.aspx. I found a sample, but some questions regarding best practices remain: http://code.msdn.microsoft.com/nclsamples/Wiki/View.aspx?title=Socket%20Performance. Should m_numConnections (the maximum number of connections the sample is designed to handle simultaneously) be equal to the number of CPU cores I have? Is m_receiveBufferSize the size of one packet, say 8 KB? How should I handle a length byte? I don't understand opsToPreAlloc; is that for when I code a transparent proxy? Regarding the multithreading, what should be used? The Reactive Extensions seem to be a good choice. Has anyone tried this in a real-world project? Are there better options? I had bad experiences with the .NET thread pool in the past.
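
    On the length-byte question, a minimal framing sketch (hedged: it assumes a 2-byte little-endian length prefix and a connected Socket, and ReceiveExactly is a hypothetical helper that loops until the requested count has arrived):

        // Read the fixed-size header first, then exactly the advertised payload.
        byte[] header = new byte[2];
        ReceiveExactly(socket, header, 2);
        int payloadLength = header[0] | (header[1] << 8); // little-endian prefix
        byte[] payload = new byte[payloadLength];
        ReceiveExactly(socket, payload, payloadLength);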

    Read the article

  • Grouping a generic list via LINQ in VB.NET

    - by CD Smith
    I need to take a collection and group it via LINQ, but all the examples I've seen fail in one manner or another, with some syntax difference I can't quite lick. My collection:

        Dim a As New List(Of ProcessAlert)
        a.Add(New ProcessAlert("0000112367", "[email protected]", "Alert", 2))
        a.Add(New ProcessAlert("0000112367", "[email protected]", "Document", 2))
        a.Add(New ProcessAlert("0000112367", "[email protected]", "Note", 2))
        a.Add(New ProcessAlert("0000112367", "[email protected]", "Alert", 1))
        a.Add(New ProcessAlert("0000112367", "[email protected]", "Document", 1))
        a.Add(New ProcessAlert("0000112367", "[email protected]", "Note", 1))
        Return a

    I need a simple way to turn this collection into this final outcome:

        "[email protected]", "Alert, Document, Note"
        "[email protected]", "Alert, Document, Note"

    Here's the definition of the ProcessAlert class:

        Public Class ProcessAlert
            Public LoanNumber As String
            Public EmailAddress As String
            Public AlertType As String
            Public AlertMethodID As Byte
        End Class

    Thanks in advance, CD
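
    A hedged sketch of the grouping (assuming .NET 3.5 with a System.Core reference; the anonymous-type member names are illustrative):

        Dim grouped = From pa In a
                      Group pa By Key = pa.EmailAddress Into Group
                      Select Address = Key,
                             Types = String.Join(", ",
                                 (From g In Group Select g.AlertType).Distinct().ToArray())

        ' Print each address with its comma-separated alert types.
        For Each item In grouped
            Console.WriteLine("""{0}"", ""{1}""", item.Address, item.Types)
        Next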

    Read the article

  • Is there any "standard" htonl-like function for 64 bits integers in C++ ?

    - by ereOn
    Hi, I'm working on an implementation of the memcache protocol which, at some points, uses 64-bit integer values. These values must be stored in network byte order. I wish there were some uint64_t htonll(uint64_t value) function to do the conversion, but unfortunately, if it exists, I couldn't find it. So I have one or two questions: Is there any portable (Windows, Linux, AIX) standard function to do this? If there is no such function, how would you implement it? I have a basic implementation in mind, but I don't know how to check the endianness at compile time to make the code portable. So your help is more than welcome here ;) Thank you.
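
    For reference, a minimal sketch of the usual fallback (hedged: the header names differ per platform, and the endianness test here is a runtime check that optimizers typically fold away, rather than the compile-time check asked about):

        #include <stdint.h>
        #include <arpa/inet.h>   // winsock2.h on Windows, for htonl()

        uint64_t htonll(uint64_t value)
        {
            if (htonl(1) == 1)
                return value;    // big-endian host: already in network order
            // Swap the two 32-bit halves and byte-swap each with htonl().
            const uint32_t high = htonl(static_cast<uint32_t>(value >> 32));
            const uint32_t low  = htonl(static_cast<uint32_t>(value & 0xFFFFFFFFULL));
            return (static_cast<uint64_t>(low) << 32) | high;
        }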

    Read the article

  • Login method customization using GINA [closed]

    - by netseng
    DUPLICATE: http://stackoverflow.com/questions/523912/login-method-customization-using-gina. Hi all, I know it's not easy to find a master of GINA, but my question is really about interprocess communication (IPC). I wrote my custom GINA in unmanaged C++ and included a method that checks the validity of a fingerprint for the user trying to log in; this function must call a method in a running Windows service written in C#. The code follows. In the GINA (unmanaged C++):

        if (Fingerprint.Validate(userName, fingerprintTemplate))
        {
            // perform login
        }

    In the Windows service (C#):

        public class Fingerprint
        {
            public static bool Validate(string userName, byte[] fingerprintTemplate)
            {
                // Perform some code to validate fingerprintTemplate against userName
                // and return the result
            }
        }

    Does anyone know how to do such communication between the GINA and the Windows service, or simply between a C++-written service and a C#-written one? Thanks

    Read the article

  • Marshal managed string[] to unmanaged char**

    - by Vince
    This is my C++ struct (using the multi-byte character set):

        typedef struct hookCONFIG {
            int threadId;
            HWND destination;
            const char** gameApps;
            const char** profilePaths;
        } HOOKCONFIG;

    And the .NET struct:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
        public struct HOOKCONFIG
        {
            public int threadId;
            public IntPtr destination;
            // MarshalAs?
            public string[] gameApps;
            // MarshalAs?
            public string[] profilePaths;
        }

    The problem is: how do I marshal the string arrays? When I access the struct field "profilePaths" in C++, I get an error like this: "An unhandled exception of type 'System.AccessViolationException' occurred in App.exe. Additional information: Attempted to read or write protected memory. This is often an indication that other memory is corrupt."

        MessageBox(0, cfg.profilePaths[0], "Title", MB_OK); // error

    Orz
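
    A hedged sketch of the manual route: the default marshaler cannot map string[] to char** inside a struct, so the usual approach declares those fields as IntPtr and builds the unmanaged blocks by hand (the helper name is illustrative, and every allocation here must be freed again later):

        // Build an unmanaged char** block from a managed string array.
        static IntPtr ToNativeStringArray(string[] items)
        {
            IntPtr block = Marshal.AllocHGlobal(IntPtr.Size * items.Length);
            for (int i = 0; i < items.Length; i++)
            {
                // ANSI to match the C++ side's multi-byte character set.
                IntPtr str = Marshal.StringToHGlobalAnsi(items[i]);
                Marshal.WriteIntPtr(block, i * IntPtr.Size, str);
            }
            return block; // assign to an IntPtr gameApps/profilePaths field
        }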

    Read the article

  • Maximum Possible File Name Length in Windows Kernel

    - by Lambert
    I was wondering: what is the longest possible file name length allowed by the Windows kernel? For example, I know the kernel uses UNICODE_STRING structures to hold all object paths, and since the byte length of a wide-character string is stored in a USHORT, that allows a maximum path length of 2^15 - 1 characters. Is there a similar hard restriction on a file name (rather than a path)? (I don't care whether NTFS or FAT32 imposes a particular restriction; I'm looking for the longest name theoretically allowed in the kernel, assuming no additional file-system or shell restrictions.) (Edit: For those wondering why this even matters: normally, traversing a directory is achieved by FindFirstFile/FindNextFile calls, one call per file. Given NtQueryDirectoryFile, the underlying system call, which returns multiple file names per call, it's actually possible to take advantage of the maximum-length restriction on the path to make an extremely fast directory traverser that uses solely the stack as a buffer. Now I'm trying to extend that concept, and I need to know the maximum size of a file name.)

    Read the article

  • Problems calling web services over an HTTPS connection

    - by shivaji123
    I have written a BlackBerry application that takes a username and password along with a URL link to a server, where I call some web services. The connection is made over HTTPS, so when I enter the username, password, and URL and hit the login button, it calls a web service, but then the application keeps trying to connect to the web service forever; after some time I get an error message, something like "unreported exception: the application is not responding", and then the application crashes out. I am using the kSOAP client library. This is the piece of code:

        synchronized (this) {
            try {
                _httpconn = (HttpConnection) Connector.open(url, Connector.READ_WRITE);
                _httpconn.setRequestMethod(HttpConnection.POST);
                _httpconn.setRequestProperty("SOAPAction", Constants.EXIST_STR);
                _httpconn.setRequestProperty("Content-Type", "text/soap+xml");
                _httpconn.setRequestProperty("User-Agent", "kSOAP/1.0");
                // Note: toBinaryString() looks wrong for a Content-Length header;
                // the decimal Integer.toString(input.length()) is presumably meant.
                String clen = Integer.toBinaryString(input.length());
                _httpconn.setRequestProperty("Content-Length", clen);
                _out = _httpconn.openDataOutputStream();
                _out.write(input.getBytes());
                _out.flush(); // may or may not be needed
                int rc = _httpconn.getResponseCode();
                if (rc == HttpConnection.HTTP_OK) {
                    isComplete = true;
                    _in = _httpconn.openInputStream();
                    msg = new StringBuffer();
                    byte[] data = new byte[1024];
                    int len = 0;
                    int size = 0;
                    while (-1 != (len = _in.read(data))) {
                        msg.append(new String(data, 0, len));
                        size += len;
                    }
                    responsData = msg.toString();
                    System.out.println("-----------responsData " + responsData);
                }
                if (responsData != null) isSuccessful = true;
                stop();
            } catch (InterruptedIOException interrIO) {
                System.out.println(interrIO);
                UiApplication.getUiApplication().invokeLater(new Runnable() {
                    public void run() {
                        Status.show("Network Connection hasn't succeeded. Please try again later.");
                    }
                });
                isComplete = true;
                stop();
            } catch (IOException io) {
                System.out.println("-----------IO EXCEPTION--------- " + io);
                UiApplication.getUiApplication().invokeLater(new Runnable() {
                    public void run() {
                        Status.show("Network Connection hasn't succeeded. Please try again later.");
                    }
                });
                isComplete = true;
                stop();
            } catch (Exception e) {
                System.out.println(e);
                UiApplication.getUiApplication().invokeLater(new Runnable() {
                    public void run() {
                        Status.show("Unable to connect to the internet at this time. Please try again later.");
                    }
                });
                isComplete = true;
                stop();
            } finally {
                try {
                    if (_httpconn != null) { _httpconn.close(); _httpconn = null; }
                    if (_in != null) { _in.close(); _in = null; }
                    if (_out != null) { _out.close(); _out = null; }
                } catch (Exception e) {
                    System.out.println(e);
                    UiApplication.getUiApplication().invokeLater(new Runnable() {
                        public void run() {
                            Status.show("Unable to connect to the internet at this time. Please try again later.");
                        }
                    });
                }
            }
        }

    Can anybody help me out? Thanks in advance.

    Read the article

  • C# socket blocking behavior

    - by Gearoid Murphy
    My situation is this: I have a C# TCP socket through which I receive structured messages consisting of a 3-byte header and a variable-size payload. The TCP data is routed through a network of tunnels and is occasionally subject to fragmentation. The solution to this is to perform a blocking read of 3 bytes for the header and a blocking read of N bytes for the variable-size payload (the value of N is in the header). The problem I'm experiencing is that occasionally the blocking receive operation returns a partial packet; that is, it reads fewer bytes than the number I explicitly set in the receive call. After some debugging, it appears that the number of bytes it returns is equal to the number of bytes in the socket's Available property before the receive operation. This behavior is contrary to my expectation: if the socket is blocking and I explicitly set the number of bytes to receive, shouldn't the socket block until it receives those bytes? Any help, pointers, etc. would be much appreciated.
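
    For reference, Receive's size argument is an upper bound, not a demand: a blocking socket waits only until at least one byte is available. A minimal sketch of the usual read-exactly loop (hedged; the exception choice is illustrative):

        static void ReceiveExactly(Socket socket, byte[] buffer, int count)
        {
            int received = 0;
            while (received < count)
            {
                // Ask only for the bytes still missing from this frame.
                int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
                if (n == 0) // graceful close before the full frame arrived
                    throw new SocketException((int)SocketError.ConnectionReset);
                received += n;
            }
        }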

    Read the article

  • Optimize C# Code Fragment

    - by Eric J.
    I'm profiling some C# code. The method below is one of the most expensive ones. For the purpose of this question, assume that micro-optimization is the right thing to do. Is there an approach to improve the performance of this method? Changing the input parameter p to ulong[] would create a macro-level inefficiency.

        static ulong Fetch64(byte[] p, int ofs = 0)
        {
            unchecked
            {
                ulong result = p[0 + ofs]
                    + ((ulong)p[1 + ofs] << 8)
                    + ((ulong)p[2 + ofs] << 16)
                    + ((ulong)p[3 + ofs] << 24)
                    + ((ulong)p[4 + ofs] << 32)
                    + ((ulong)p[5 + ofs] << 40)
                    + ((ulong)p[6 + ofs] << 48)
                    + ((ulong)p[7 + ofs] << 56);
                return result;
            }
        }
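
    One hedged micro-level option, assuming the code only ever runs on little-endian x86 as profiled: BitConverter reads the eight bytes in one step instead of eight indexed loads.

        // Equivalent to Fetch64 on a little-endian machine only.
        static ulong Fetch64Fast(byte[] p, int ofs = 0)
        {
            return BitConverter.ToUInt64(p, ofs);
        }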

    Read the article

  • How to calculate the correct image size in an output PDF using iTextSharp?

    - by MK
    I'm trying to add an image to a PDF using iTextSharp; regardless of the image size, it always appears to be mapped to a different, larger size inside the PDF. The image I add is 624x500 pixels (DPI: 72) [screenshot of the source image], and here is how it looks in the output PDF [screenshot of the output PDF]. And here is how I created the document:

        Document document = new Document();
        System.IO.MemoryStream stream = new MemoryStream();
        PdfWriter writer = PdfWriter.GetInstance(document, stream);
        document.Open();
        System.Drawing.Image pngImage = System.Drawing.Image.FromFile("test.png");
        Image pdfImage = Image.GetInstance(pngImage, System.Drawing.Imaging.ImageFormat.Png);
        document.Add(pdfImage);
        document.Close();
        byte[] buffer = stream.GetBuffer();
        FileStream fs = new FileStream("test.pdf", FileMode.Create);
        fs.Write(buffer, 0, buffer.Length);
        fs.Close();

    Any idea why, or how to calculate the correct size?
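
    A hedged note on the sizing: iText measures pages in points (72 per inch), so a 72-DPI, 624x500-pixel image maps to 624x500 points, and any apparent growth usually comes from the PDF viewer's zoom level. To pin the placed size explicitly, a minimal sketch:

        // Force the placed size in points (the values here mirror the pixel size).
        pdfImage.ScaleAbsolute(624f, 500f);
        document.Add(pdfImage);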

    Read the article

  • C/C++ - Convert 24-bit signed integer to float

    - by e-t172
    I'm programming in C++. I need to convert a 24-bit signed integer (stored in a 3-byte array) to float (normalizing to [-1.0, 1.0]). The platform is MSVC++ on x86 (which means the input is little-endian). I tried this:

        float convert(const unsigned char* src)
        {
            int i = src[2];
            i = (i << 8) | src[1];
            i = (i << 8) | src[0];
            const float Q = 2.0 / ((1 << 24) - 1.0);
            return (i + 0.5) * Q;
        }

    I'm not entirely sure, but it seems the results I'm getting from this code are incorrect. So, is my code wrong, and if so, why?
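
    For comparison, a hedged alternative sketch: the usual pitfall with this layout is that the 24-bit sign bit never reaches the int's sign bit, so negative samples decode as large positive values.

        float convert(const unsigned char* src)
        {
            int i = (src[2] << 16) | (src[1] << 8) | src[0];
            if (i & 0x800000)       // sign bit of the 24-bit value set?
                i |= ~0xFFFFFF;     // extend it through the upper byte
            return i / 8388608.0f;  // 2^23: maps to roughly [-1.0, 1.0)
        }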

    Read the article

  • C# web request with POST encoding question

    - by rlandster
    On the MSDN site there is an example of some C# code that shows how to make a web request with POSTed data. Here is an excerpt of that code:

        WebRequest request = WebRequest.Create("http://www.contoso.com/PostAccepter.aspx");
        request.Method = "POST";
        string postData = "This is a test that posts this string to a Web server.";
        byte[] byteArray = Encoding.UTF8.GetBytes(postData); // (*)
        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = byteArray.Length;
        Stream dataStream = request.GetRequestStream();
        dataStream.Write(byteArray, 0, byteArray.Length);
        dataStream.Close();
        WebResponse response = request.GetResponse();
        // ...more...

    The line marked (*) is the one that puzzles me. Shouldn't the data be encoded using the UrlEncode function rather than UTF-8? Isn't that what application/x-www-form-urlencoded implies?
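
    For what it's worth, a hedged sketch of the distinction: URL encoding escapes the characters inside each form field, while UTF-8 merely turns the resulting string into bytes, so the two compose rather than compete (the field name "message" is illustrative):

        // Percent-escape the value first; UTF-8 is only the byte conversion.
        string postData = "message=" + HttpUtility.UrlEncode("This is a test...");
        byte[] byteArray = Encoding.UTF8.GetBytes(postData);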

    Read the article

  • format specifier for short integer

    - by cateof
    I'm not using the format specifiers in C correctly. A few lines of code:

        int main() {
            char dest[] = "stack";
            unsigned short val = 500;
            char c = 'a';
            char* final = (char*) malloc(strlen(dest) + 6);
            snprintf(final, strlen(dest) + 6, "%c%c%hd%c%c%s", c, c, val, c, c, dest);
            printf("%s\n", final);
            return 0;
        }

    I want my executable to print aa500aastack and not aa500aasta. Why am I losing 2 bytes? What is the correct format specifier for an unsigned short integer? Thanks.
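
    A hedged sketch of the two fixes (the worst-case arithmetic assumes val stays a 16-bit value, i.e. at most five digits):

        /* %hu is the conversion for unsigned short; the buffer must hold
           2 + 5 + 2 + strlen(dest) characters plus the terminator. */
        size_t len = strlen(dest) + 2 + 5 + 2 + 1;
        char *final = malloc(len);
        snprintf(final, len, "%c%c%hu%c%c%s", c, c, val, c, c, dest);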

    Read the article

  • Downloaded chm is blocked, is there a solution?

    - by David Rutten
    CHM files that are downloaded are often tagged as potentially malicious by Windows, which effectively blocks all the HTML pages inside them. There's an easy fix (just unblock the file after you download it), but I was wondering if there's a better way to provide unblocked CHM files. What if I were to download the CHM file (as a byte stream) from our server inside the application, then write all the data to a file on disk: would it still be blocked? Is there another, better way still?
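
    For reference, the mark is an NTFS alternate data stream named Zone.Identifier, written by Windows' attachment manager when a browser saves the file; a file that your own code writes with a plain FileStream never receives one. If an existing file must be unblocked, a hedged sketch (P/Invoke, because System.IO cannot address stream paths):

        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern bool DeleteFile(string name);

        static bool Unblock(string path)
        {
            // Deleting the Zone.Identifier stream clears the "blocked" flag.
            return DeleteFile(path + ":Zone.Identifier");
        }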

    Read the article

  • SHA-256 hashing gives wrong result in Android

    - by user642966
    I am trying to hash 12345 with 1111 as the salt using SHA-256, and the answer I get is 010def5ed854d162aa19309479f3ca44dc7563232ff072d1c87bd85943d0e930, which is not the same as the value returned by this site: http://hash.online-convert.com/sha256-generator. Here's the code snippet:

        public String getHashValue(String entity, String salt) {
            byte[] hashValue = null;
            try {
                MessageDigest digest = MessageDigest.getInstance("SHA-256");
                digest.update(entity.getBytes("UTF-8"));
                digest.update(salt.getBytes("UTF-8"));
                hashValue = digest.digest();
            } catch (NoSuchAlgorithmException e) {
                Log.i(TAG, "Exception " + e.getMessage());
            } catch (UnsupportedEncodingException e) {
                e.printStackTrace();
            }
            return BasicUtil.byteArrayToHexString(hashValue);
        }

    I have verified my printing method with a sample from SO and the result is fine. Can someone tell me what's wrong here? Just to clarify: when I hash the same value and salt in the iOS code, the returned value is the same as the value given by the converting site.

    Read the article

  • Translate from Java to C#: simple code to re-encode a string

    - by Dr. Zim
    We were sent this formula, written in Java, to hash and encode a string:

        String myInput = "test1234";
        MessageDigest md = MessageDigest.getInstance("SHA");
        byte[] myD = md.digest(myInput.getBytes());
        BASE64Encoder en64 = new BASE64Encoder();
        String myOutput = new String(java.net.URLEncoder.encode(en64.encode(myD)));
        // myOutput becomes "F009U%2Bx99bVTGwS3cQdHf%2BJcpCo%3D"

    Our attempt at writing this in C# is:

        System.Security.Cryptography.SHA1 sha1 =
            new System.Security.Cryptography.SHA1CryptoServiceProvider();
        string myOutput = HttpUtility.UrlEncode(
            Convert.ToBase64String(
                sha1.ComputeHash(
                    ASCIIEncoding.Default.GetBytes(myInput))));

    However, the output is nowhere near the same; it doesn't even have percent signs in it. Any chance anyone would know where we are going wrong?
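
    A hedged sketch of our next attempt: make the input encoding explicit on both sides, and note that HttpUtility.UrlEncode emits lowercase hex escapes ("%2b") where Java's URLEncoder emits uppercase ("%2B"), so the two outputs should only be compared case-insensitively:

        using (var sha1 = System.Security.Cryptography.SHA1.Create())
        {
            // Explicit UTF-8 instead of the platform default on either side.
            byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(myInput));
            string myOutput = HttpUtility.UrlEncode(Convert.ToBase64String(hash));
        }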

    Read the article

  • About enumerations in Delphi and C++ in 64-bit environments

    - by sum1stolemyname
    I recently had to work around the different default sizes used for enumerations in Delphi and C++, since I have to use a C++ DLL from a Delphi application. One function call returns an array of structs (or records, in Delphi), the first element of which is an enum. To make this work, I use packed records (or aligned(1) structs). However, since Delphi selects the size of an enum variable dynamically by default and uses the smallest data type possible (it was a byte in my case), but C++ uses an int for enums, my data was not interpreted correctly. Delphi offers a compiler switch to work around this, so the declaration of the enum becomes:

        {$Z4}
        TTypeofLight = (
            V3d_AMBIENT,
            V3d_DIRECTIONAL,
            V3d_POSITIONAL,
            V3d_SPOT
        );
        {$Z1}

    My questions are: What will become of my structs when they are compiled on or for a 64-bit environment? Does the default C++ int grow to 8 bytes? Are there other memory-alignment or data-type-size changes (other than pointers)?

    Read the article

  • Why is conversion from UTF-8 to ISO-8859-1 not the same in Windows and Linux?

    - by user1895307
    I have the following code to convert from UTF-8 to ISO-8859-1 in a JAR file; when I execute this JAR in Windows I get one result, and in CentOS I get another. Might anyone know why?

        public static void main(String[] args) {
            try {
                String x = "Ä, ä, É, é, Ö, ö, Ü, ü, ß, «, »";
                Charset utf8charset = Charset.forName("UTF-8");
                Charset iso88591charset = Charset.forName("ISO-8859-1");
                ByteBuffer inputBuffer = ByteBuffer.wrap(x.getBytes());
                CharBuffer data = utf8charset.decode(inputBuffer);
                ByteBuffer outputBuffer = iso88591charset.encode(data);
                byte[] outputData = outputBuffer.array();
                String z = new String(outputData);
                System.out.println(z);
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
        }

    In Windows, java -jar test.jar test.txt creates a file containing: Ä, ä, É, é, Ö, ö, Ü, ü, ß, «, » but in CentOS I get: ??, ä, ??, é, ??, ö, ??, ü, ??, «, ». Help please!
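
    The likely culprit, hedged: x.getBytes() and new String(outputData) both use the JVM's default charset, which differs between the Windows and CentOS machines. A minimal sketch with every charset pinned explicitly:

        byte[] isoBytes = x.getBytes("ISO-8859-1");     // encode straight from the String
        String z = new String(isoBytes, "ISO-8859-1");  // decode with the same charset
        System.out.println(z);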

    Read the article

  • String encoding for Twitter

    - by Miguel Ribeiro
    I'm developing a program that sends tweets. I have this piece of code:

        StringBuilder sb = new StringBuilder("Recomendo ");
        sb.append(lblName.getText());
        sb.append(" no canal " + lblCanal.getText());
        sb.append(" no dia " + date[2] + "/" + date[1] + "/" + date[0]);
        sb.append(" às " + time[0] + "h" + time[1]);
        byte[] defaultStrBytes = sb.toString().getBytes("ISO-8859-1");
        String encodedString = new String(defaultStrBytes, "UTF-8");

    But when I send the tweet I get the "?" symbol or other strange characters because of accents like "à". I've also tried with only:

        String encodedString = new String(sb.toString().getBytes(), "UTF-8"); // also tried with ISO-8859-1

    but the problem remains.

    Read the article

  • Can the SHA-1 algorithm be computed on a stream, with a low memory footprint?

    - by raoulsson
    I am looking for a way to compute SHA-1 checksums of very large files without having to load them fully into memory at once. I don't know the details of the SHA-1 implementation and therefore would like to know if that is even possible. If you know the SAX XML parser, then what I am looking for is something similar: computing the SHA-1 checksum by only ever loading a small part into memory at a time. All the examples I found, at least in Java, always depend on fully loading the file, byte array, or string into memory. If you know of implementations (in any language), please let me know!
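
    For the record, SHA-1 processes its input in 64-byte blocks, so incremental hashing is exactly what the design allows, and in Java the standard MessageDigest API already works that way. A minimal sketch:

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class StreamingSha1 {
            public static byte[] sha1(String path)
                    throws IOException, NoSuchAlgorithmException {
                MessageDigest md = MessageDigest.getInstance("SHA-1");
                InputStream in = new FileInputStream(path);
                try {
                    byte[] buffer = new byte[8192];   // only 8 KB resident at a time
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        md.update(buffer, 0, read);   // feed the digest chunk by chunk
                    }
                } finally {
                    in.close();
                }
                return md.digest();
            }
        }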

    Read the article

  • InternetReadFile() corrupting downloads in C

    - by Lienau
    I'm able to download text documents (.html, .txt, etc.), but I can't download images or EXEs. I'm pretty sure that this is because I'm using a char buffer, and those files are binary. I know that in C# I would use a byte. But what data type should I use in this case?

        char buffer[1];
        DWORD dwRead;
        FILE *pFile = fopen(file, "w");
        while (InternetReadFile(hRequest, buffer, 1, &dwRead)) {
            if (dwRead != 1) break;
            fprintf(pFile, "%s", buffer);
        }
        fclose(pFile);
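
    A hedged sketch of the usual fix: the type itself is fine; the corruption comes from fprintf("%s"), which stops at the first zero byte (and needs a terminator the buffer never gets), and from text mode, which rewrites line endings. Treat the payload as raw bytes instead (the 4 KB buffer size is illustrative):

        char buffer[4096];
        DWORD dwRead;
        FILE *pFile = fopen(file, "wb");   /* binary mode, not "w" */
        while (InternetReadFile(hRequest, buffer, sizeof(buffer), &dwRead)
               && dwRead > 0) {
            fwrite(buffer, 1, dwRead, pFile);   /* write exactly dwRead bytes */
        }
        fclose(pFile);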

    Read the article
